00:00:00.001 Started by upstream project "autotest-per-patch" build number 126191 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 23954 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.064 The recommended git tool is: git 00:00:00.065 using credential 00000000-0000-0000-0000-000000000002 00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.099 Fetching changes from the remote Git repository 00:00:00.103 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.146 Using shallow fetch with depth 1 00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.146 > git --version # timeout=10 00:00:00.185 > git --version # 'git version 2.39.2' 00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/71/24171/2 # timeout=5 00:00:06.753 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.765 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.804 Checking out Revision 1e4055c0ee28da4fa0007a72f92a6499a45bf65d (FETCH_HEAD) 00:00:06.804 > git config core.sparsecheckout # timeout=10 00:00:06.816 > git read-tree -mu HEAD # timeout=10 00:00:06.837 > git checkout -f 1e4055c0ee28da4fa0007a72f92a6499a45bf65d # timeout=5 00:00:06.867 Commit message: "packer: Drop centos7" 00:00:06.868 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.969 [Pipeline] Start of Pipeline 00:00:06.984 [Pipeline] library 00:00:06.986 Loading library shm_lib@master 00:00:06.986 Library shm_lib@master is cached. Copying from home. 00:00:07.005 [Pipeline] node 00:00:07.018 Running on CYP12 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:07.020 [Pipeline] { 00:00:07.034 [Pipeline] catchError 00:00:07.036 [Pipeline] { 00:00:07.048 [Pipeline] wrap 00:00:07.056 [Pipeline] { 00:00:07.064 [Pipeline] stage 00:00:07.065 [Pipeline] { (Prologue) 00:00:07.323 [Pipeline] sh 00:00:07.608 + logger -p user.info -t JENKINS-CI 00:00:07.631 [Pipeline] echo 00:00:07.633 Node: CYP12 00:00:07.640 [Pipeline] sh 00:00:07.947 [Pipeline] setCustomBuildProperty 00:00:07.957 [Pipeline] echo 00:00:07.959 Cleanup processes 00:00:07.965 [Pipeline] sh 00:00:08.254 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.254 1487882 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.274 [Pipeline] sh 00:00:08.568 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.568 ++ grep -v 'sudo pgrep' 00:00:08.568 ++ awk '{print $1}' 00:00:08.568 + sudo kill -9 00:00:08.568 + true 00:00:08.583 [Pipeline] cleanWs 00:00:08.612 [WS-CLEANUP] Deleting project workspace... 00:00:08.612 [WS-CLEANUP] Deferred wipeout is used... 
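For reference, the stale-process cleanup the prologue just ran can be reproduced with the pipeline below. This is a minimal sketch, not the CI script itself: it assumes the same workspace path, and it substitutes `xargs -r` for the log's bare `kill -9 $(...)` so that kill is simply skipped when no processes match (the log instead lets kill fail on an empty argument list and swallows the error with `+ true`).

#!/usr/bin/env bash
# Sketch of the cleanup idiom shown in the log above (path copied from the log).
# grep -v 'sudo pgrep' drops the pgrep invocation itself so we never kill it;
# xargs -r (GNU --no-run-if-empty) stands in for the log's `kill ... || true`.
WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest
sudo pgrep -af "$WORKSPACE/spdk" \
  | grep -v 'sudo pgrep' \
  | awk '{print $1}' \
  | xargs -r sudo kill -9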
00:00:08.621 [WS-CLEANUP] done 00:00:08.626 [Pipeline] setCustomBuildProperty 00:00:08.642 [Pipeline] sh 00:00:08.926 + sudo git config --global --replace-all safe.directory '*' 00:00:09.008 [Pipeline] httpRequest 00:00:09.041 [Pipeline] echo 00:00:09.042 Sorcerer 10.211.164.101 is alive 00:00:09.048 [Pipeline] httpRequest 00:00:09.052 HttpMethod: GET 00:00:09.053 URL: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:09.053 Sending request to url: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:09.079 Response Code: HTTP/1.1 200 OK 00:00:09.080 Success: Status code 200 is in the accepted range: 200,404 00:00:09.080 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:21.212 [Pipeline] sh 00:00:21.528 + tar --no-same-owner -xf jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:21.548 [Pipeline] httpRequest 00:00:21.579 [Pipeline] echo 00:00:21.581 Sorcerer 10.211.164.101 is alive 00:00:21.590 [Pipeline] httpRequest 00:00:21.596 HttpMethod: GET 00:00:21.597 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:21.597 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:21.617 Response Code: HTTP/1.1 200 OK 00:00:21.618 Success: Status code 200 is in the accepted range: 200,404 00:00:21.619 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:01:14.613 [Pipeline] sh 00:01:14.903 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:01:18.219 [Pipeline] sh 00:01:18.502 + git -C spdk log --oneline -n5 00:01:18.502 2728651ee accel: adjust task per ch define name 00:01:18.502 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:18.502 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:01:18.502 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:01:18.502 719d03c6a sock/uring: only register net impl if supported 00:01:18.514 [Pipeline] } 00:01:18.527 [Pipeline] // stage 00:01:18.534 [Pipeline] stage 00:01:18.535 [Pipeline] { (Prepare) 00:01:18.551 [Pipeline] writeFile 00:01:18.567 [Pipeline] sh 00:01:18.851 + logger -p user.info -t JENKINS-CI 00:01:18.864 [Pipeline] sh 00:01:19.145 + logger -p user.info -t JENKINS-CI 00:01:19.159 [Pipeline] sh 00:01:19.444 + cat autorun-spdk.conf 00:01:19.444 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.444 SPDK_TEST_NVMF=1 00:01:19.444 SPDK_TEST_NVME_CLI=1 00:01:19.444 SPDK_TEST_NVMF_NICS=mlx5 00:01:19.444 SPDK_RUN_UBSAN=1 00:01:19.444 NET_TYPE=phy 00:01:19.452 RUN_NIGHTLY=0 00:01:19.458 [Pipeline] readFile 00:01:19.488 [Pipeline] withEnv 00:01:19.489 [Pipeline] { 00:01:19.504 [Pipeline] sh 00:01:19.848 + set -ex 00:01:19.848 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:19.848 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:19.848 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.848 ++ SPDK_TEST_NVMF=1 00:01:19.848 ++ SPDK_TEST_NVME_CLI=1 00:01:19.848 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:19.848 ++ SPDK_RUN_UBSAN=1 00:01:19.848 ++ NET_TYPE=phy 00:01:19.848 ++ RUN_NIGHTLY=0 00:01:19.848 + case $SPDK_TEST_NVMF_NICS in 00:01:19.848 + DRIVERS=mlx5_ib 00:01:19.848 + [[ -n mlx5_ib ]] 00:01:19.848 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:19.848 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:29.847 rmmod: ERROR: Module irdma is 
not currently loaded 00:01:29.847 rmmod: ERROR: Module i40iw is not currently loaded 00:01:29.847 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:29.847 + true 00:01:29.847 + for D in $DRIVERS 00:01:29.847 + sudo modprobe mlx5_ib 00:01:29.847 + exit 0 00:01:29.858 [Pipeline] } 00:01:29.873 [Pipeline] // withEnv 00:01:29.878 [Pipeline] } 00:01:29.894 [Pipeline] // stage 00:01:29.905 [Pipeline] catchError 00:01:29.907 [Pipeline] { 00:01:29.923 [Pipeline] timeout 00:01:29.923 Timeout set to expire in 1 hr 0 min 00:01:29.925 [Pipeline] { 00:01:29.941 [Pipeline] stage 00:01:29.943 [Pipeline] { (Tests) 00:01:29.955 [Pipeline] sh 00:01:30.242 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:30.242 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:30.242 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:30.242 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:30.242 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:30.242 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:30.242 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:30.242 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:30.242 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:30.242 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:30.242 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:30.242 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:30.242 + source /etc/os-release 00:01:30.242 ++ NAME='Fedora Linux' 00:01:30.242 ++ VERSION='38 (Cloud Edition)' 00:01:30.242 ++ ID=fedora 00:01:30.242 ++ VERSION_ID=38 00:01:30.242 ++ VERSION_CODENAME= 00:01:30.242 ++ PLATFORM_ID=platform:f38 00:01:30.242 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:30.242 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:30.242 ++ LOGO=fedora-logo-icon 00:01:30.242 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:30.242 ++ HOME_URL=https://fedoraproject.org/ 00:01:30.242 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:30.242 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:30.242 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:30.242 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:30.242 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:30.242 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:30.242 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:30.242 ++ SUPPORT_END=2024-05-14 00:01:30.242 ++ VARIANT='Cloud Edition' 00:01:30.242 ++ VARIANT_ID=cloud 00:01:30.242 + uname -a 00:01:30.242 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:30.242 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:33.550 Hugepages 00:01:33.550 node hugesize free / total 00:01:33.550 node0 1048576kB 0 / 0 00:01:33.550 node0 2048kB 0 / 0 00:01:33.550 node1 1048576kB 0 / 0 00:01:33.550 node1 2048kB 0 / 0 00:01:33.550 00:01:33.550 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:33.550 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:33.550 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:33.550 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:33.550 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:33.550 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:33.550 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:33.550 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:33.550 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:33.838 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 
00:01:33.838 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:33.838 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:33.838 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:33.838 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:33.838 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:33.838 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:33.838 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:33.838 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:33.838 + rm -f /tmp/spdk-ld-path 00:01:33.838 + source autorun-spdk.conf 00:01:33.838 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.838 ++ SPDK_TEST_NVMF=1 00:01:33.838 ++ SPDK_TEST_NVME_CLI=1 00:01:33.838 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:33.838 ++ SPDK_RUN_UBSAN=1 00:01:33.838 ++ NET_TYPE=phy 00:01:33.838 ++ RUN_NIGHTLY=0 00:01:33.838 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:33.838 + [[ -n '' ]] 00:01:33.838 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:33.838 + for M in /var/spdk/build-*-manifest.txt 00:01:33.838 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:33.838 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:33.838 + for M in /var/spdk/build-*-manifest.txt 00:01:33.838 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:33.838 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:33.838 ++ uname 00:01:33.838 + [[ Linux == \L\i\n\u\x ]] 00:01:33.838 + sudo dmesg -T 00:01:33.838 + sudo dmesg --clear 00:01:33.838 + dmesg_pid=1488997 00:01:33.838 + [[ Fedora Linux == FreeBSD ]] 00:01:33.838 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.838 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:33.838 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:33.838 + [[ -x /usr/src/fio-static/fio ]] 00:01:33.838 + sudo dmesg -Tw 00:01:33.838 + export FIO_BIN=/usr/src/fio-static/fio 00:01:33.838 + FIO_BIN=/usr/src/fio-static/fio 00:01:33.838 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:33.838 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:33.838 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:33.838 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.838 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:33.838 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:33.838 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.838 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:33.838 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:33.838 Test configuration: 00:01:33.838 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.838 SPDK_TEST_NVMF=1 00:01:33.838 SPDK_TEST_NVME_CLI=1 00:01:33.838 SPDK_TEST_NVMF_NICS=mlx5 00:01:33.838 SPDK_RUN_UBSAN=1 00:01:33.838 NET_TYPE=phy 00:01:34.099 RUN_NIGHTLY=0 14:43:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:34.099 14:43:49 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:34.099 14:43:49 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:34.099 14:43:49 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:34.099 14:43:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.099 14:43:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.099 14:43:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.099 14:43:49 -- paths/export.sh@5 -- $ export PATH 00:01:34.099 14:43:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:34.099 14:43:49 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:34.099 14:43:49 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:34.099 14:43:49 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721047429.XXXXXX 00:01:34.099 14:43:49 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721047429.fxlrTZ 00:01:34.099 14:43:49 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:34.099 14:43:49 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:34.099 14:43:49 -- common/autobuild_common.sh@453 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:34.099 14:43:49 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:34.099 14:43:49 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:34.099 14:43:49 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:34.099 14:43:49 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:34.099 14:43:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.099 14:43:49 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:34.099 14:43:49 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:34.099 14:43:49 -- pm/common@17 -- $ local monitor 00:01:34.099 14:43:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.099 14:43:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.099 14:43:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.099 14:43:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.099 14:43:49 -- pm/common@21 -- $ date +%s 00:01:34.099 14:43:49 -- pm/common@25 -- $ sleep 1 00:01:34.099 14:43:49 -- pm/common@21 -- $ date +%s 00:01:34.099 14:43:49 -- pm/common@21 -- $ date +%s 00:01:34.099 14:43:49 -- pm/common@21 -- $ date +%s 00:01:34.099 14:43:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047429 00:01:34.099 14:43:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047429 00:01:34.099 14:43:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047429 00:01:34.099 14:43:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047429 00:01:34.099 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047429_collect-vmstat.pm.log 00:01:34.099 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047429_collect-cpu-load.pm.log 00:01:34.099 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047429_collect-cpu-temp.pm.log 00:01:34.099 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047429_collect-bmc-pm.bmc.pm.log 00:01:35.039 14:43:50 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:35.039 14:43:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:35.039 14:43:50 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:35.039 14:43:50 -- spdk/autobuild.sh@13 -- $ cd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:35.039 14:43:50 -- spdk/autobuild.sh@16 -- $ date -u 00:01:35.039 Mon Jul 15 12:43:50 PM UTC 2024 00:01:35.039 14:43:50 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:35.039 v24.09-pre-206-g2728651ee 00:01:35.039 14:43:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:35.039 14:43:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:35.039 14:43:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:35.039 14:43:50 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:35.039 14:43:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:35.039 14:43:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.039 ************************************ 00:01:35.039 START TEST ubsan 00:01:35.039 ************************************ 00:01:35.039 14:43:51 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:35.039 using ubsan 00:01:35.039 00:01:35.039 real 0m0.001s 00:01:35.039 user 0m0.000s 00:01:35.039 sys 0m0.000s 00:01:35.039 14:43:51 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:35.039 14:43:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:35.039 ************************************ 00:01:35.039 END TEST ubsan 00:01:35.039 ************************************ 00:01:35.039 14:43:51 -- common/autotest_common.sh@1142 -- $ return 0 00:01:35.039 14:43:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:35.039 14:43:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:35.039 14:43:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:35.039 14:43:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:35.039 14:43:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:35.039 14:43:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:35.039 14:43:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:35.039 14:43:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:35.039 14:43:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:35.299 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:35.299 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:35.558 Using 'verbs' RDMA provider 00:01:51.391 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:03.615 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:03.615 Creating mk/config.mk...done. 00:02:03.615 Creating mk/cc.flags.mk...done. 00:02:03.615 Type 'make' to build. 00:02:03.615 14:44:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:03.615 14:44:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:03.615 14:44:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:03.615 14:44:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.615 ************************************ 00:02:03.615 START TEST make 00:02:03.615 ************************************ 00:02:03.615 14:44:18 make -- common/autotest_common.sh@1123 -- $ make -j144 00:02:03.615 make[1]: Nothing to be done for 'all'. 
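The configure-and-build sequence the log records above reduces to the two commands below; a minimal sketch assuming an SPDK checkout at $SPDK_DIR, with the flags copied verbatim from the `configure` invocation in the log (the UBSAN and coverage options follow from SPDK_RUN_UBSAN=1 in autorun-spdk.conf).

# Sketch of the build steps recorded above; flags taken from the log's
# autobuild.sh configure line, job count from its `run_test make make -j144`.
cd "$SPDK_DIR"
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-shared
make -j144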
00:02:11.745 The Meson build system 00:02:11.745 Version: 1.3.1 00:02:11.745 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:11.745 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:11.745 Build type: native build 00:02:11.745 Program cat found: YES (/usr/bin/cat) 00:02:11.745 Project name: DPDK 00:02:11.745 Project version: 24.03.0 00:02:11.745 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:11.745 C linker for the host machine: cc ld.bfd 2.39-16 00:02:11.745 Host machine cpu family: x86_64 00:02:11.745 Host machine cpu: x86_64 00:02:11.745 Message: ## Building in Developer Mode ## 00:02:11.745 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:11.745 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:11.745 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:11.745 Program python3 found: YES (/usr/bin/python3) 00:02:11.745 Program cat found: YES (/usr/bin/cat) 00:02:11.745 Compiler for C supports arguments -march=native: YES 00:02:11.745 Checking for size of "void *" : 8 00:02:11.745 Checking for size of "void *" : 8 (cached) 00:02:11.745 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:11.745 Library m found: YES 00:02:11.745 Library numa found: YES 00:02:11.745 Has header "numaif.h" : YES 00:02:11.745 Library fdt found: NO 00:02:11.745 Library execinfo found: NO 00:02:11.745 Has header "execinfo.h" : YES 00:02:11.745 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:11.745 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:11.745 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:11.745 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:11.745 Run-time dependency openssl found: YES 3.0.9 00:02:11.745 Run-time dependency libpcap found: YES 1.10.4 00:02:11.745 Has header "pcap.h" with dependency libpcap: YES 00:02:11.745 Compiler for C supports arguments -Wcast-qual: YES 00:02:11.745 Compiler for C supports arguments -Wdeprecated: YES 00:02:11.745 Compiler for C supports arguments -Wformat: YES 00:02:11.745 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:11.745 Compiler for C supports arguments -Wformat-security: NO 00:02:11.745 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.745 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:11.745 Compiler for C supports arguments -Wnested-externs: YES 00:02:11.746 Compiler for C supports arguments -Wold-style-definition: YES 00:02:11.746 Compiler for C supports arguments -Wpointer-arith: YES 00:02:11.746 Compiler for C supports arguments -Wsign-compare: YES 00:02:11.746 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:11.746 Compiler for C supports arguments -Wundef: YES 00:02:11.746 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.746 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:11.746 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:11.746 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.746 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:11.746 Program objdump found: YES (/usr/bin/objdump) 00:02:11.746 Compiler for C supports arguments -mavx512f: YES 00:02:11.746 Checking if "AVX512 checking" compiles: YES 00:02:11.746 Fetching 
value of define "__SSE4_2__" : 1 00:02:11.746 Fetching value of define "__AES__" : 1 00:02:11.746 Fetching value of define "__AVX__" : 1 00:02:11.746 Fetching value of define "__AVX2__" : 1 00:02:11.746 Fetching value of define "__AVX512BW__" : 1 00:02:11.746 Fetching value of define "__AVX512CD__" : 1 00:02:11.746 Fetching value of define "__AVX512DQ__" : 1 00:02:11.746 Fetching value of define "__AVX512F__" : 1 00:02:11.746 Fetching value of define "__AVX512VL__" : 1 00:02:11.746 Fetching value of define "__PCLMUL__" : 1 00:02:11.746 Fetching value of define "__RDRND__" : 1 00:02:11.746 Fetching value of define "__RDSEED__" : 1 00:02:11.746 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:11.746 Fetching value of define "__znver1__" : (undefined) 00:02:11.746 Fetching value of define "__znver2__" : (undefined) 00:02:11.746 Fetching value of define "__znver3__" : (undefined) 00:02:11.746 Fetching value of define "__znver4__" : (undefined) 00:02:11.746 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:11.746 Message: lib/log: Defining dependency "log" 00:02:11.746 Message: lib/kvargs: Defining dependency "kvargs" 00:02:11.746 Message: lib/telemetry: Defining dependency "telemetry" 00:02:11.746 Checking for function "getentropy" : NO 00:02:11.746 Message: lib/eal: Defining dependency "eal" 00:02:11.746 Message: lib/ring: Defining dependency "ring" 00:02:11.746 Message: lib/rcu: Defining dependency "rcu" 00:02:11.746 Message: lib/mempool: Defining dependency "mempool" 00:02:11.746 Message: lib/mbuf: Defining dependency "mbuf" 00:02:11.746 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:11.746 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.746 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.746 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.746 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.746 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:11.746 Compiler for C supports arguments -mpclmul: YES 00:02:11.746 Compiler for C supports arguments -maes: YES 00:02:11.746 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.746 Compiler for C supports arguments -mavx512bw: YES 00:02:11.746 Compiler for C supports arguments -mavx512dq: YES 00:02:11.746 Compiler for C supports arguments -mavx512vl: YES 00:02:11.746 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:11.746 Compiler for C supports arguments -mavx2: YES 00:02:11.746 Compiler for C supports arguments -mavx: YES 00:02:11.746 Message: lib/net: Defining dependency "net" 00:02:11.746 Message: lib/meter: Defining dependency "meter" 00:02:11.746 Message: lib/ethdev: Defining dependency "ethdev" 00:02:11.746 Message: lib/pci: Defining dependency "pci" 00:02:11.746 Message: lib/cmdline: Defining dependency "cmdline" 00:02:11.746 Message: lib/hash: Defining dependency "hash" 00:02:11.746 Message: lib/timer: Defining dependency "timer" 00:02:11.746 Message: lib/compressdev: Defining dependency "compressdev" 00:02:11.746 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:11.746 Message: lib/dmadev: Defining dependency "dmadev" 00:02:11.746 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:11.746 Message: lib/power: Defining dependency "power" 00:02:11.746 Message: lib/reorder: Defining dependency "reorder" 00:02:11.746 Message: lib/security: Defining dependency "security" 00:02:11.746 Has header "linux/userfaultfd.h" : YES 00:02:11.746 Has header "linux/vduse.h" : YES 00:02:11.746 Message: lib/vhost: Defining 
dependency "vhost" 00:02:11.746 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:11.746 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:11.746 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:11.746 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:11.746 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:11.746 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:11.746 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:11.746 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:11.746 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:11.746 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:11.746 Program doxygen found: YES (/usr/bin/doxygen) 00:02:11.746 Configuring doxy-api-html.conf using configuration 00:02:11.746 Configuring doxy-api-man.conf using configuration 00:02:11.746 Program mandb found: YES (/usr/bin/mandb) 00:02:11.746 Program sphinx-build found: NO 00:02:11.746 Configuring rte_build_config.h using configuration 00:02:11.746 Message: 00:02:11.746 ================= 00:02:11.746 Applications Enabled 00:02:11.746 ================= 00:02:11.746 00:02:11.746 apps: 00:02:11.746 00:02:11.746 00:02:11.746 Message: 00:02:11.746 ================= 00:02:11.746 Libraries Enabled 00:02:11.746 ================= 00:02:11.746 00:02:11.746 libs: 00:02:11.746 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:11.746 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:11.746 cryptodev, dmadev, power, reorder, security, vhost, 00:02:11.746 00:02:11.746 Message: 00:02:11.746 =============== 00:02:11.746 Drivers Enabled 00:02:11.746 =============== 00:02:11.746 00:02:11.746 common: 00:02:11.746 00:02:11.746 bus: 00:02:11.746 pci, vdev, 00:02:11.746 mempool: 00:02:11.746 ring, 00:02:11.746 dma: 00:02:11.746 00:02:11.746 net: 00:02:11.746 00:02:11.746 crypto: 00:02:11.746 00:02:11.746 compress: 00:02:11.746 00:02:11.746 vdpa: 00:02:11.746 00:02:11.746 00:02:11.746 Message: 00:02:11.746 ================= 00:02:11.746 Content Skipped 00:02:11.746 ================= 00:02:11.746 00:02:11.746 apps: 00:02:11.746 dumpcap: explicitly disabled via build config 00:02:11.746 graph: explicitly disabled via build config 00:02:11.746 pdump: explicitly disabled via build config 00:02:11.746 proc-info: explicitly disabled via build config 00:02:11.746 test-acl: explicitly disabled via build config 00:02:11.746 test-bbdev: explicitly disabled via build config 00:02:11.746 test-cmdline: explicitly disabled via build config 00:02:11.746 test-compress-perf: explicitly disabled via build config 00:02:11.746 test-crypto-perf: explicitly disabled via build config 00:02:11.746 test-dma-perf: explicitly disabled via build config 00:02:11.746 test-eventdev: explicitly disabled via build config 00:02:11.746 test-fib: explicitly disabled via build config 00:02:11.746 test-flow-perf: explicitly disabled via build config 00:02:11.746 test-gpudev: explicitly disabled via build config 00:02:11.746 test-mldev: explicitly disabled via build config 00:02:11.746 test-pipeline: explicitly disabled via build config 00:02:11.746 test-pmd: explicitly disabled via build config 00:02:11.746 test-regex: explicitly disabled via build config 00:02:11.746 test-sad: explicitly disabled via build config 00:02:11.746 test-security-perf: explicitly disabled via build config 
00:02:11.746 00:02:11.746 libs: 00:02:11.746 argparse: explicitly disabled via build config 00:02:11.746 metrics: explicitly disabled via build config 00:02:11.746 acl: explicitly disabled via build config 00:02:11.746 bbdev: explicitly disabled via build config 00:02:11.746 bitratestats: explicitly disabled via build config 00:02:11.746 bpf: explicitly disabled via build config 00:02:11.746 cfgfile: explicitly disabled via build config 00:02:11.746 distributor: explicitly disabled via build config 00:02:11.746 efd: explicitly disabled via build config 00:02:11.746 eventdev: explicitly disabled via build config 00:02:11.746 dispatcher: explicitly disabled via build config 00:02:11.746 gpudev: explicitly disabled via build config 00:02:11.746 gro: explicitly disabled via build config 00:02:11.746 gso: explicitly disabled via build config 00:02:11.746 ip_frag: explicitly disabled via build config 00:02:11.746 jobstats: explicitly disabled via build config 00:02:11.746 latencystats: explicitly disabled via build config 00:02:11.746 lpm: explicitly disabled via build config 00:02:11.746 member: explicitly disabled via build config 00:02:11.746 pcapng: explicitly disabled via build config 00:02:11.746 rawdev: explicitly disabled via build config 00:02:11.746 regexdev: explicitly disabled via build config 00:02:11.746 mldev: explicitly disabled via build config 00:02:11.746 rib: explicitly disabled via build config 00:02:11.746 sched: explicitly disabled via build config 00:02:11.746 stack: explicitly disabled via build config 00:02:11.746 ipsec: explicitly disabled via build config 00:02:11.746 pdcp: explicitly disabled via build config 00:02:11.746 fib: explicitly disabled via build config 00:02:11.746 port: explicitly disabled via build config 00:02:11.746 pdump: explicitly disabled via build config 00:02:11.746 table: explicitly disabled via build config 00:02:11.746 pipeline: explicitly disabled via build config 00:02:11.746 graph: explicitly disabled via build config 00:02:11.746 node: explicitly disabled via build config 00:02:11.746 00:02:11.746 drivers: 00:02:11.746 common/cpt: not in enabled drivers build config 00:02:11.746 common/dpaax: not in enabled drivers build config 00:02:11.746 common/iavf: not in enabled drivers build config 00:02:11.746 common/idpf: not in enabled drivers build config 00:02:11.746 common/ionic: not in enabled drivers build config 00:02:11.746 common/mvep: not in enabled drivers build config 00:02:11.746 common/octeontx: not in enabled drivers build config 00:02:11.746 bus/auxiliary: not in enabled drivers build config 00:02:11.746 bus/cdx: not in enabled drivers build config 00:02:11.746 bus/dpaa: not in enabled drivers build config 00:02:11.746 bus/fslmc: not in enabled drivers build config 00:02:11.746 bus/ifpga: not in enabled drivers build config 00:02:11.746 bus/platform: not in enabled drivers build config 00:02:11.746 bus/uacce: not in enabled drivers build config 00:02:11.746 bus/vmbus: not in enabled drivers build config 00:02:11.746 common/cnxk: not in enabled drivers build config 00:02:11.747 common/mlx5: not in enabled drivers build config 00:02:11.747 common/nfp: not in enabled drivers build config 00:02:11.747 common/nitrox: not in enabled drivers build config 00:02:11.747 common/qat: not in enabled drivers build config 00:02:11.747 common/sfc_efx: not in enabled drivers build config 00:02:11.747 mempool/bucket: not in enabled drivers build config 00:02:11.747 mempool/cnxk: not in enabled drivers build config 00:02:11.747 mempool/dpaa: not in 
enabled drivers build config 00:02:11.747 mempool/dpaa2: not in enabled drivers build config 00:02:11.747 mempool/octeontx: not in enabled drivers build config 00:02:11.747 mempool/stack: not in enabled drivers build config 00:02:11.747 dma/cnxk: not in enabled drivers build config 00:02:11.747 dma/dpaa: not in enabled drivers build config 00:02:11.747 dma/dpaa2: not in enabled drivers build config 00:02:11.747 dma/hisilicon: not in enabled drivers build config 00:02:11.747 dma/idxd: not in enabled drivers build config 00:02:11.747 dma/ioat: not in enabled drivers build config 00:02:11.747 dma/skeleton: not in enabled drivers build config 00:02:11.747 net/af_packet: not in enabled drivers build config 00:02:11.747 net/af_xdp: not in enabled drivers build config 00:02:11.747 net/ark: not in enabled drivers build config 00:02:11.747 net/atlantic: not in enabled drivers build config 00:02:11.747 net/avp: not in enabled drivers build config 00:02:11.747 net/axgbe: not in enabled drivers build config 00:02:11.747 net/bnx2x: not in enabled drivers build config 00:02:11.747 net/bnxt: not in enabled drivers build config 00:02:11.747 net/bonding: not in enabled drivers build config 00:02:11.747 net/cnxk: not in enabled drivers build config 00:02:11.747 net/cpfl: not in enabled drivers build config 00:02:11.747 net/cxgbe: not in enabled drivers build config 00:02:11.747 net/dpaa: not in enabled drivers build config 00:02:11.747 net/dpaa2: not in enabled drivers build config 00:02:11.747 net/e1000: not in enabled drivers build config 00:02:11.747 net/ena: not in enabled drivers build config 00:02:11.747 net/enetc: not in enabled drivers build config 00:02:11.747 net/enetfec: not in enabled drivers build config 00:02:11.747 net/enic: not in enabled drivers build config 00:02:11.747 net/failsafe: not in enabled drivers build config 00:02:11.747 net/fm10k: not in enabled drivers build config 00:02:11.747 net/gve: not in enabled drivers build config 00:02:11.747 net/hinic: not in enabled drivers build config 00:02:11.747 net/hns3: not in enabled drivers build config 00:02:11.747 net/i40e: not in enabled drivers build config 00:02:11.747 net/iavf: not in enabled drivers build config 00:02:11.747 net/ice: not in enabled drivers build config 00:02:11.747 net/idpf: not in enabled drivers build config 00:02:11.747 net/igc: not in enabled drivers build config 00:02:11.747 net/ionic: not in enabled drivers build config 00:02:11.747 net/ipn3ke: not in enabled drivers build config 00:02:11.747 net/ixgbe: not in enabled drivers build config 00:02:11.747 net/mana: not in enabled drivers build config 00:02:11.747 net/memif: not in enabled drivers build config 00:02:11.747 net/mlx4: not in enabled drivers build config 00:02:11.747 net/mlx5: not in enabled drivers build config 00:02:11.747 net/mvneta: not in enabled drivers build config 00:02:11.747 net/mvpp2: not in enabled drivers build config 00:02:11.747 net/netvsc: not in enabled drivers build config 00:02:11.747 net/nfb: not in enabled drivers build config 00:02:11.747 net/nfp: not in enabled drivers build config 00:02:11.747 net/ngbe: not in enabled drivers build config 00:02:11.747 net/null: not in enabled drivers build config 00:02:11.747 net/octeontx: not in enabled drivers build config 00:02:11.747 net/octeon_ep: not in enabled drivers build config 00:02:11.747 net/pcap: not in enabled drivers build config 00:02:11.747 net/pfe: not in enabled drivers build config 00:02:11.747 net/qede: not in enabled drivers build config 00:02:11.747 net/ring: not in 
enabled drivers build config 00:02:11.747 net/sfc: not in enabled drivers build config 00:02:11.747 net/softnic: not in enabled drivers build config 00:02:11.747 net/tap: not in enabled drivers build config 00:02:11.747 net/thunderx: not in enabled drivers build config 00:02:11.747 net/txgbe: not in enabled drivers build config 00:02:11.747 net/vdev_netvsc: not in enabled drivers build config 00:02:11.747 net/vhost: not in enabled drivers build config 00:02:11.747 net/virtio: not in enabled drivers build config 00:02:11.747 net/vmxnet3: not in enabled drivers build config 00:02:11.747 raw/*: missing internal dependency, "rawdev" 00:02:11.747 crypto/armv8: not in enabled drivers build config 00:02:11.747 crypto/bcmfs: not in enabled drivers build config 00:02:11.747 crypto/caam_jr: not in enabled drivers build config 00:02:11.747 crypto/ccp: not in enabled drivers build config 00:02:11.747 crypto/cnxk: not in enabled drivers build config 00:02:11.747 crypto/dpaa_sec: not in enabled drivers build config 00:02:11.747 crypto/dpaa2_sec: not in enabled drivers build config 00:02:11.747 crypto/ipsec_mb: not in enabled drivers build config 00:02:11.747 crypto/mlx5: not in enabled drivers build config 00:02:11.747 crypto/mvsam: not in enabled drivers build config 00:02:11.747 crypto/nitrox: not in enabled drivers build config 00:02:11.747 crypto/null: not in enabled drivers build config 00:02:11.747 crypto/octeontx: not in enabled drivers build config 00:02:11.747 crypto/openssl: not in enabled drivers build config 00:02:11.747 crypto/scheduler: not in enabled drivers build config 00:02:11.747 crypto/uadk: not in enabled drivers build config 00:02:11.747 crypto/virtio: not in enabled drivers build config 00:02:11.747 compress/isal: not in enabled drivers build config 00:02:11.747 compress/mlx5: not in enabled drivers build config 00:02:11.747 compress/nitrox: not in enabled drivers build config 00:02:11.747 compress/octeontx: not in enabled drivers build config 00:02:11.747 compress/zlib: not in enabled drivers build config 00:02:11.747 regex/*: missing internal dependency, "regexdev" 00:02:11.747 ml/*: missing internal dependency, "mldev" 00:02:11.747 vdpa/ifc: not in enabled drivers build config 00:02:11.747 vdpa/mlx5: not in enabled drivers build config 00:02:11.747 vdpa/nfp: not in enabled drivers build config 00:02:11.747 vdpa/sfc: not in enabled drivers build config 00:02:11.747 event/*: missing internal dependency, "eventdev" 00:02:11.747 baseband/*: missing internal dependency, "bbdev" 00:02:11.747 gpu/*: missing internal dependency, "gpudev" 00:02:11.747 00:02:11.747 00:02:11.747 Build targets in project: 84 00:02:11.747 00:02:11.747 DPDK 24.03.0 00:02:11.747 00:02:11.747 User defined options 00:02:11.747 buildtype : debug 00:02:11.747 default_library : shared 00:02:11.747 libdir : lib 00:02:11.747 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:11.747 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:11.747 c_link_args : 00:02:11.747 cpu_instruction_set: native 00:02:11.747 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:11.747 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:11.747 enable_docs : false 00:02:11.747 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:11.747 enable_kmods : false 00:02:11.747 max_lcores : 128 00:02:11.747 tests : false 00:02:11.747 00:02:11.747 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.007 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:12.269 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:12.269 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.269 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.269 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.269 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:12.269 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:12.269 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.269 [8/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:12.269 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:12.269 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:12.269 [11/267] Linking static target lib/librte_kvargs.a 00:02:12.270 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.270 [13/267] Linking static target lib/librte_log.a 00:02:12.270 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.270 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.270 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:12.270 [17/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:12.270 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.270 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:12.270 [20/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.270 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:12.270 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:12.270 [23/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:12.270 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.270 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:12.528 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:12.528 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:12.528 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:12.528 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:12.528 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:12.528 [31/267] Linking static target lib/librte_pci.a 00:02:12.528 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:12.528 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:12.528 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 
00:02:12.528 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:12.528 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.528 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:12.528 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:12.528 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:12.528 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:12.528 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.787 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:12.787 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.787 [44/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.787 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.787 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.787 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:12.787 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.787 [49/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.787 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:12.787 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:12.787 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:12.787 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:12.787 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.787 [55/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.787 [56/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:12.787 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:12.787 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:12.787 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.787 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:12.787 [61/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:12.787 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:12.787 [63/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.787 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:12.787 [65/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:12.787 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:12.787 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.787 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:12.787 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:12.787 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.787 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:12.787 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:12.787 [73/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 
00:02:12.787 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:12.787 [75/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.787 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:12.787 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:12.787 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.788 [79/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:12.788 [80/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:12.788 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.788 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:12.788 [83/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.788 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:12.788 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:12.788 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.788 [87/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.788 [88/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:12.788 [89/267] Linking static target lib/librte_ring.a 00:02:12.788 [90/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.788 [91/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:12.788 [92/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:12.788 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:12.788 [94/267] Linking static target lib/librte_telemetry.a 00:02:12.788 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.788 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:12.788 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:12.788 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:12.788 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.788 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:12.788 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:12.788 [102/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.788 [103/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:12.788 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:12.788 [105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:12.788 [106/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.788 [107/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:12.788 [108/267] Linking static target lib/librte_meter.a 00:02:12.788 [109/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:12.788 [110/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:12.788 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:12.788 [112/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.788 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:12.788 [114/267] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.788 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.788 [116/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.788 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:12.788 [118/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:12.788 [119/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.788 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:12.788 [121/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:12.788 [122/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:12.788 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:12.788 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:12.788 [125/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:12.788 [126/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:12.788 [127/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.788 [128/267] Linking static target lib/librte_timer.a 00:02:12.788 [129/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:12.788 [130/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:12.788 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:12.788 [132/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.788 [133/267] Linking static target lib/librte_rcu.a 00:02:12.788 [134/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.788 [135/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:12.788 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:12.788 [137/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.788 [138/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:12.788 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.048 [140/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.048 [141/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.048 [142/267] Linking static target lib/librte_cmdline.a 00:02:13.048 [143/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.048 [144/267] Linking static target lib/librte_reorder.a 00:02:13.048 [145/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.048 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.048 [147/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.048 [148/267] Linking static target lib/librte_mbuf.a 00:02:13.048 [149/267] Linking target lib/librte_log.so.24.1 00:02:13.048 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.048 [151/267] Linking static target lib/librte_power.a 00:02:13.048 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.048 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.048 [154/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.048 [155/267] Linking static target lib/librte_mempool.a 00:02:13.048 [156/267] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.048 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.048 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.048 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.048 [160/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.048 [161/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.048 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.048 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.048 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:13.048 [165/267] Linking static target lib/librte_security.a 00:02:13.048 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.048 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.048 [168/267] Linking static target lib/librte_compressdev.a 00:02:13.048 [169/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.048 [170/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:13.048 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.048 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.048 [173/267] Linking static target lib/librte_dmadev.a 00:02:13.048 [174/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.048 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.048 [176/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:13.048 [177/267] Linking static target lib/librte_eal.a 00:02:13.048 [178/267] Linking static target lib/librte_net.a 00:02:13.048 [179/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:13.048 [180/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:13.048 [181/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.048 [182/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.048 [183/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.048 [184/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.048 [185/267] Linking target lib/librte_kvargs.so.24.1 00:02:13.048 [186/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.048 [187/267] Linking static target drivers/librte_bus_vdev.a 00:02:13.048 [188/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.049 [189/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:13.049 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:13.049 [191/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.049 [192/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:13.049 [193/267] Linking static target lib/librte_hash.a 00:02:13.310 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.310 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.310 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.310 
[197/267] Linking static target drivers/librte_bus_pci.a 00:02:13.310 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.310 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:13.310 [200/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.310 [201/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:13.310 [202/267] Linking static target lib/librte_cryptodev.a 00:02:13.310 [203/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.310 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.310 [205/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.310 [206/267] Linking static target drivers/librte_mempool_ring.a 00:02:13.310 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:13.310 [208/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.310 [209/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.310 [210/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.310 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.570 [212/267] Linking target lib/librte_telemetry.so.24.1 00:02:13.570 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.570 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:13.570 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.830 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.830 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.830 [218/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.830 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:13.830 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:13.830 [221/267] Linking static target lib/librte_ethdev.a 00:02:13.830 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.830 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.090 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.090 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.350 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.350 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:14.350 [228/267] Linking static target lib/librte_vhost.a 00:02:15.737 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.680 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.274 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.217 [232/267] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:24.479 [233/267] Linking target lib/librte_eal.so.24.1 00:02:24.479 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.740 [235/267] Linking target lib/librte_ring.so.24.1 00:02:24.740 [236/267] Linking target lib/librte_meter.so.24.1 00:02:24.740 [237/267] Linking target lib/librte_timer.so.24.1 00:02:24.740 [238/267] Linking target lib/librte_pci.so.24.1 00:02:24.740 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:24.740 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.740 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.740 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.740 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.740 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.740 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.740 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:24.740 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:24.740 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.000 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.000 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.000 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.000 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:25.000 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.261 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:02:25.261 [255/267] Linking target lib/librte_net.so.24.1 00:02:25.261 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:25.261 [257/267] Linking target lib/librte_compressdev.so.24.1 00:02:25.261 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.261 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.261 [260/267] Linking target lib/librte_hash.so.24.1 00:02:25.261 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:25.261 [262/267] Linking target lib/librte_security.so.24.1 00:02:25.261 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:25.521 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.521 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.521 [266/267] Linking target lib/librte_power.so.24.1 00:02:25.521 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:25.521 INFO: autodetecting backend as ninja 00:02:25.521 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:26.905 CC lib/ut/ut.o 00:02:26.905 CC lib/log/log.o 00:02:26.905 CC lib/ut_mock/mock.o 00:02:26.905 CC lib/log/log_flags.o 00:02:26.905 CC lib/log/log_deprecated.o 00:02:26.905 LIB libspdk_ut.a 00:02:26.905 SO libspdk_ut.so.2.0 00:02:26.905 LIB libspdk_log.a 00:02:26.905 LIB libspdk_ut_mock.a 00:02:26.905 SO libspdk_log.so.7.0 00:02:26.905 SO libspdk_ut_mock.so.6.0 00:02:26.905 SYMLINK libspdk_ut.so 00:02:26.905 SYMLINK libspdk_ut_mock.so 00:02:26.905 SYMLINK libspdk_log.so 00:02:27.166 CC lib/dma/dma.o 00:02:27.166 CC lib/ioat/ioat.o 00:02:27.425 CC 
lib/util/base64.o 00:02:27.425 CC lib/util/bit_array.o 00:02:27.425 CC lib/util/cpuset.o 00:02:27.425 CC lib/util/crc16.o 00:02:27.425 CC lib/util/crc32.o 00:02:27.425 CC lib/util/crc32c.o 00:02:27.425 CC lib/util/dif.o 00:02:27.425 CC lib/util/crc32_ieee.o 00:02:27.425 CC lib/util/crc64.o 00:02:27.425 CXX lib/trace_parser/trace.o 00:02:27.425 CC lib/util/fd.o 00:02:27.425 CC lib/util/iov.o 00:02:27.425 CC lib/util/file.o 00:02:27.425 CC lib/util/hexlify.o 00:02:27.425 CC lib/util/math.o 00:02:27.425 CC lib/util/pipe.o 00:02:27.425 CC lib/util/strerror_tls.o 00:02:27.425 CC lib/util/string.o 00:02:27.425 CC lib/util/uuid.o 00:02:27.425 CC lib/util/fd_group.o 00:02:27.425 CC lib/util/xor.o 00:02:27.425 CC lib/util/zipf.o 00:02:27.425 CC lib/vfio_user/host/vfio_user.o 00:02:27.425 CC lib/vfio_user/host/vfio_user_pci.o 00:02:27.425 LIB libspdk_dma.a 00:02:27.425 SO libspdk_dma.so.4.0 00:02:27.425 LIB libspdk_ioat.a 00:02:27.685 SYMLINK libspdk_dma.so 00:02:27.685 SO libspdk_ioat.so.7.0 00:02:27.685 SYMLINK libspdk_ioat.so 00:02:27.685 LIB libspdk_vfio_user.a 00:02:27.685 SO libspdk_vfio_user.so.5.0 00:02:27.685 LIB libspdk_util.a 00:02:27.944 SYMLINK libspdk_vfio_user.so 00:02:27.944 SO libspdk_util.so.9.1 00:02:27.944 SYMLINK libspdk_util.so 00:02:28.203 LIB libspdk_trace_parser.a 00:02:28.203 SO libspdk_trace_parser.so.5.0 00:02:28.203 SYMLINK libspdk_trace_parser.so 00:02:28.461 CC lib/json/json_parse.o 00:02:28.461 CC lib/json/json_util.o 00:02:28.461 CC lib/json/json_write.o 00:02:28.461 CC lib/env_dpdk/env.o 00:02:28.461 CC lib/env_dpdk/init.o 00:02:28.461 CC lib/rdma_provider/common.o 00:02:28.461 CC lib/env_dpdk/memory.o 00:02:28.461 CC lib/env_dpdk/pci.o 00:02:28.461 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:28.461 CC lib/env_dpdk/pci_ioat.o 00:02:28.461 CC lib/env_dpdk/threads.o 00:02:28.461 CC lib/env_dpdk/pci_virtio.o 00:02:28.461 CC lib/env_dpdk/pci_vmd.o 00:02:28.461 CC lib/env_dpdk/pci_idxd.o 00:02:28.461 CC lib/env_dpdk/pci_event.o 00:02:28.461 CC lib/env_dpdk/sigbus_handler.o 00:02:28.461 CC lib/idxd/idxd.o 00:02:28.461 CC lib/vmd/vmd.o 00:02:28.461 CC lib/env_dpdk/pci_dpdk.o 00:02:28.461 CC lib/idxd/idxd_user.o 00:02:28.461 CC lib/vmd/led.o 00:02:28.461 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:28.461 CC lib/conf/conf.o 00:02:28.461 CC lib/idxd/idxd_kernel.o 00:02:28.461 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:28.461 CC lib/rdma_utils/rdma_utils.o 00:02:28.461 LIB libspdk_rdma_provider.a 00:02:28.721 LIB libspdk_conf.a 00:02:28.721 SO libspdk_rdma_provider.so.6.0 00:02:28.721 LIB libspdk_json.a 00:02:28.721 SO libspdk_conf.so.6.0 00:02:28.721 LIB libspdk_rdma_utils.a 00:02:28.721 SO libspdk_json.so.6.0 00:02:28.721 SYMLINK libspdk_rdma_provider.so 00:02:28.721 SO libspdk_rdma_utils.so.1.0 00:02:28.721 SYMLINK libspdk_conf.so 00:02:28.721 SYMLINK libspdk_json.so 00:02:28.721 SYMLINK libspdk_rdma_utils.so 00:02:28.981 LIB libspdk_idxd.a 00:02:28.981 SO libspdk_idxd.so.12.0 00:02:28.981 LIB libspdk_vmd.a 00:02:28.981 SYMLINK libspdk_idxd.so 00:02:28.981 SO libspdk_vmd.so.6.0 00:02:28.981 CC lib/jsonrpc/jsonrpc_server.o 00:02:28.981 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:28.981 CC lib/jsonrpc/jsonrpc_client.o 00:02:28.981 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:28.981 SYMLINK libspdk_vmd.so 00:02:29.241 LIB libspdk_jsonrpc.a 00:02:29.241 SO libspdk_jsonrpc.so.6.0 00:02:29.501 SYMLINK libspdk_jsonrpc.so 00:02:29.501 LIB libspdk_env_dpdk.a 00:02:29.501 SO libspdk_env_dpdk.so.14.1 00:02:29.761 SYMLINK libspdk_env_dpdk.so 00:02:29.761 CC lib/rpc/rpc.o 00:02:30.021 LIB 
libspdk_rpc.a 00:02:30.021 SO libspdk_rpc.so.6.0 00:02:30.021 SYMLINK libspdk_rpc.so 00:02:30.590 CC lib/notify/notify.o 00:02:30.590 CC lib/notify/notify_rpc.o 00:02:30.590 CC lib/trace/trace.o 00:02:30.590 CC lib/trace/trace_flags.o 00:02:30.590 CC lib/trace/trace_rpc.o 00:02:30.590 CC lib/keyring/keyring.o 00:02:30.590 CC lib/keyring/keyring_rpc.o 00:02:30.590 LIB libspdk_notify.a 00:02:30.590 SO libspdk_notify.so.6.0 00:02:30.590 LIB libspdk_trace.a 00:02:30.590 SO libspdk_trace.so.10.0 00:02:30.590 LIB libspdk_keyring.a 00:02:30.590 SYMLINK libspdk_notify.so 00:02:30.850 SO libspdk_keyring.so.1.0 00:02:30.850 SYMLINK libspdk_trace.so 00:02:30.850 SYMLINK libspdk_keyring.so 00:02:31.109 CC lib/thread/thread.o 00:02:31.109 CC lib/thread/iobuf.o 00:02:31.109 CC lib/sock/sock.o 00:02:31.109 CC lib/sock/sock_rpc.o 00:02:31.369 LIB libspdk_sock.a 00:02:31.630 SO libspdk_sock.so.10.0 00:02:31.630 SYMLINK libspdk_sock.so 00:02:31.890 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:31.890 CC lib/nvme/nvme_ctrlr.o 00:02:31.890 CC lib/nvme/nvme_fabric.o 00:02:31.890 CC lib/nvme/nvme_ns_cmd.o 00:02:31.890 CC lib/nvme/nvme_ns.o 00:02:31.890 CC lib/nvme/nvme_pcie_common.o 00:02:31.890 CC lib/nvme/nvme_pcie.o 00:02:31.890 CC lib/nvme/nvme_qpair.o 00:02:31.890 CC lib/nvme/nvme.o 00:02:31.890 CC lib/nvme/nvme_quirks.o 00:02:31.890 CC lib/nvme/nvme_transport.o 00:02:31.890 CC lib/nvme/nvme_discovery.o 00:02:31.890 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:31.890 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:31.890 CC lib/nvme/nvme_tcp.o 00:02:31.890 CC lib/nvme/nvme_opal.o 00:02:31.890 CC lib/nvme/nvme_io_msg.o 00:02:31.890 CC lib/nvme/nvme_poll_group.o 00:02:31.890 CC lib/nvme/nvme_zns.o 00:02:31.890 CC lib/nvme/nvme_stubs.o 00:02:31.890 CC lib/nvme/nvme_auth.o 00:02:31.890 CC lib/nvme/nvme_cuse.o 00:02:31.890 CC lib/nvme/nvme_rdma.o 00:02:32.457 LIB libspdk_thread.a 00:02:32.457 SO libspdk_thread.so.10.1 00:02:32.457 SYMLINK libspdk_thread.so 00:02:32.717 CC lib/accel/accel.o 00:02:32.717 CC lib/blob/blobstore.o 00:02:32.717 CC lib/blob/request.o 00:02:32.717 CC lib/accel/accel_rpc.o 00:02:32.717 CC lib/blob/zeroes.o 00:02:32.717 CC lib/blob/blob_bs_dev.o 00:02:32.717 CC lib/accel/accel_sw.o 00:02:32.717 CC lib/init/subsystem_rpc.o 00:02:32.717 CC lib/init/json_config.o 00:02:32.717 CC lib/init/subsystem.o 00:02:32.717 CC lib/init/rpc.o 00:02:32.717 CC lib/virtio/virtio.o 00:02:32.717 CC lib/virtio/virtio_vhost_user.o 00:02:32.717 CC lib/virtio/virtio_vfio_user.o 00:02:32.717 CC lib/virtio/virtio_pci.o 00:02:32.978 LIB libspdk_init.a 00:02:32.978 SO libspdk_init.so.5.0 00:02:32.978 LIB libspdk_virtio.a 00:02:32.978 SYMLINK libspdk_init.so 00:02:32.978 SO libspdk_virtio.so.7.0 00:02:33.239 SYMLINK libspdk_virtio.so 00:02:33.503 CC lib/event/app.o 00:02:33.503 CC lib/event/reactor.o 00:02:33.503 CC lib/event/log_rpc.o 00:02:33.503 CC lib/event/app_rpc.o 00:02:33.503 CC lib/event/scheduler_static.o 00:02:33.503 LIB libspdk_accel.a 00:02:33.503 SO libspdk_accel.so.15.1 00:02:33.841 SYMLINK libspdk_accel.so 00:02:33.841 LIB libspdk_nvme.a 00:02:33.841 LIB libspdk_event.a 00:02:33.841 SO libspdk_nvme.so.13.1 00:02:33.841 SO libspdk_event.so.14.0 00:02:33.841 SYMLINK libspdk_event.so 00:02:34.126 CC lib/bdev/bdev.o 00:02:34.126 CC lib/bdev/bdev_rpc.o 00:02:34.126 CC lib/bdev/bdev_zone.o 00:02:34.126 CC lib/bdev/part.o 00:02:34.126 CC lib/bdev/scsi_nvme.o 00:02:34.126 SYMLINK libspdk_nvme.so 00:02:35.514 LIB libspdk_blob.a 00:02:35.514 SO libspdk_blob.so.11.0 00:02:35.514 SYMLINK libspdk_blob.so 00:02:35.776 CC lib/lvol/lvol.o 
00:02:35.776 CC lib/blobfs/blobfs.o 00:02:35.776 CC lib/blobfs/tree.o 00:02:36.348 LIB libspdk_bdev.a 00:02:36.348 SO libspdk_bdev.so.15.1 00:02:36.348 SYMLINK libspdk_bdev.so 00:02:36.348 LIB libspdk_blobfs.a 00:02:36.348 SO libspdk_blobfs.so.10.0 00:02:36.608 LIB libspdk_lvol.a 00:02:36.608 SO libspdk_lvol.so.10.0 00:02:36.608 SYMLINK libspdk_blobfs.so 00:02:36.608 SYMLINK libspdk_lvol.so 00:02:36.609 CC lib/ublk/ublk.o 00:02:36.609 CC lib/ublk/ublk_rpc.o 00:02:36.868 CC lib/nvmf/ctrlr.o 00:02:36.868 CC lib/ftl/ftl_core.o 00:02:36.868 CC lib/nvmf/ctrlr_discovery.o 00:02:36.868 CC lib/nvmf/ctrlr_bdev.o 00:02:36.868 CC lib/ftl/ftl_init.o 00:02:36.868 CC lib/nvmf/subsystem.o 00:02:36.868 CC lib/ftl/ftl_layout.o 00:02:36.868 CC lib/nvmf/nvmf.o 00:02:36.868 CC lib/ftl/ftl_debug.o 00:02:36.868 CC lib/nvmf/nvmf_rpc.o 00:02:36.868 CC lib/ftl/ftl_io.o 00:02:36.868 CC lib/nvmf/transport.o 00:02:36.868 CC lib/nvmf/tcp.o 00:02:36.868 CC lib/ftl/ftl_sb.o 00:02:36.868 CC lib/nvmf/stubs.o 00:02:36.868 CC lib/ftl/ftl_l2p.o 00:02:36.868 CC lib/nvmf/mdns_server.o 00:02:36.868 CC lib/ftl/ftl_l2p_flat.o 00:02:36.868 CC lib/nvmf/rdma.o 00:02:36.868 CC lib/nvmf/auth.o 00:02:36.868 CC lib/ftl/ftl_nv_cache.o 00:02:36.868 CC lib/ftl/ftl_band.o 00:02:36.868 CC lib/ftl/ftl_band_ops.o 00:02:36.868 CC lib/ftl/ftl_writer.o 00:02:36.868 CC lib/scsi/dev.o 00:02:36.868 CC lib/ftl/ftl_rq.o 00:02:36.868 CC lib/scsi/lun.o 00:02:36.868 CC lib/ftl/ftl_reloc.o 00:02:36.868 CC lib/scsi/port.o 00:02:36.868 CC lib/ftl/ftl_l2p_cache.o 00:02:36.868 CC lib/scsi/scsi.o 00:02:36.868 CC lib/scsi/scsi_bdev.o 00:02:36.868 CC lib/ftl/ftl_p2l.o 00:02:36.868 CC lib/nbd/nbd.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt.o 00:02:36.868 CC lib/nbd/nbd_rpc.o 00:02:36.868 CC lib/scsi/scsi_pr.o 00:02:36.868 CC lib/scsi/scsi_rpc.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:36.868 CC lib/scsi/task.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:36.868 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:36.868 CC lib/ftl/utils/ftl_conf.o 00:02:36.868 CC lib/ftl/utils/ftl_md.o 00:02:36.868 CC lib/ftl/utils/ftl_mempool.o 00:02:36.868 CC lib/ftl/utils/ftl_bitmap.o 00:02:36.868 CC lib/ftl/utils/ftl_property.o 00:02:36.868 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:36.868 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:36.868 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:36.868 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:36.868 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:36.868 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:36.868 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:36.868 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:36.868 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:36.868 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:36.868 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:36.868 CC lib/ftl/base/ftl_base_dev.o 00:02:36.868 CC lib/ftl/ftl_trace.o 00:02:36.868 CC lib/ftl/base/ftl_base_bdev.o 00:02:37.437 LIB libspdk_nbd.a 00:02:37.437 LIB libspdk_scsi.a 00:02:37.437 SO libspdk_nbd.so.7.0 00:02:37.437 SO libspdk_scsi.so.9.0 00:02:37.437 SYMLINK libspdk_nbd.so 00:02:37.437 LIB libspdk_ublk.a 00:02:37.437 SO libspdk_ublk.so.3.0 00:02:37.437 SYMLINK libspdk_scsi.so 00:02:37.437 
SYMLINK libspdk_ublk.so 00:02:37.697 LIB libspdk_ftl.a 00:02:37.697 CC lib/vhost/vhost.o 00:02:37.697 CC lib/vhost/vhost_rpc.o 00:02:37.697 CC lib/vhost/vhost_scsi.o 00:02:37.697 CC lib/vhost/vhost_blk.o 00:02:37.697 CC lib/vhost/rte_vhost_user.o 00:02:37.697 CC lib/iscsi/conn.o 00:02:37.697 CC lib/iscsi/init_grp.o 00:02:37.697 CC lib/iscsi/iscsi.o 00:02:37.697 CC lib/iscsi/md5.o 00:02:37.697 CC lib/iscsi/param.o 00:02:37.697 CC lib/iscsi/portal_grp.o 00:02:37.697 CC lib/iscsi/tgt_node.o 00:02:37.697 CC lib/iscsi/iscsi_subsystem.o 00:02:37.697 CC lib/iscsi/task.o 00:02:37.697 CC lib/iscsi/iscsi_rpc.o 00:02:37.958 SO libspdk_ftl.so.9.0 00:02:38.219 SYMLINK libspdk_ftl.so 00:02:38.480 LIB libspdk_nvmf.a 00:02:38.480 SO libspdk_nvmf.so.18.1 00:02:38.741 LIB libspdk_vhost.a 00:02:38.741 SYMLINK libspdk_nvmf.so 00:02:38.741 SO libspdk_vhost.so.8.0 00:02:38.741 SYMLINK libspdk_vhost.so 00:02:39.002 LIB libspdk_iscsi.a 00:02:39.002 SO libspdk_iscsi.so.8.0 00:02:39.261 SYMLINK libspdk_iscsi.so 00:02:39.833 CC module/env_dpdk/env_dpdk_rpc.o 00:02:39.833 CC module/keyring/file/keyring.o 00:02:39.833 LIB libspdk_env_dpdk_rpc.a 00:02:39.833 CC module/keyring/file/keyring_rpc.o 00:02:39.833 CC module/keyring/linux/keyring_rpc.o 00:02:39.833 CC module/keyring/linux/keyring.o 00:02:39.833 CC module/accel/ioat/accel_ioat.o 00:02:39.833 CC module/accel/ioat/accel_ioat_rpc.o 00:02:39.833 CC module/sock/posix/posix.o 00:02:39.833 CC module/accel/iaa/accel_iaa.o 00:02:39.833 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:39.833 CC module/blob/bdev/blob_bdev.o 00:02:39.833 CC module/accel/iaa/accel_iaa_rpc.o 00:02:39.833 CC module/accel/dsa/accel_dsa.o 00:02:39.833 CC module/accel/dsa/accel_dsa_rpc.o 00:02:39.833 CC module/accel/error/accel_error.o 00:02:39.833 CC module/accel/error/accel_error_rpc.o 00:02:39.833 CC module/scheduler/gscheduler/gscheduler.o 00:02:39.833 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:39.833 SO libspdk_env_dpdk_rpc.so.6.0 00:02:39.833 SYMLINK libspdk_env_dpdk_rpc.so 00:02:40.094 LIB libspdk_keyring_linux.a 00:02:40.094 LIB libspdk_keyring_file.a 00:02:40.094 LIB libspdk_accel_ioat.a 00:02:40.094 LIB libspdk_scheduler_gscheduler.a 00:02:40.094 SO libspdk_keyring_file.so.1.0 00:02:40.094 SO libspdk_keyring_linux.so.1.0 00:02:40.094 LIB libspdk_scheduler_dpdk_governor.a 00:02:40.094 LIB libspdk_accel_iaa.a 00:02:40.094 LIB libspdk_accel_error.a 00:02:40.094 LIB libspdk_scheduler_dynamic.a 00:02:40.094 SO libspdk_accel_ioat.so.6.0 00:02:40.094 SO libspdk_scheduler_gscheduler.so.4.0 00:02:40.094 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:40.094 SO libspdk_accel_iaa.so.3.0 00:02:40.094 SO libspdk_scheduler_dynamic.so.4.0 00:02:40.094 SO libspdk_accel_error.so.2.0 00:02:40.094 SYMLINK libspdk_keyring_file.so 00:02:40.094 LIB libspdk_blob_bdev.a 00:02:40.094 SYMLINK libspdk_keyring_linux.so 00:02:40.094 LIB libspdk_accel_dsa.a 00:02:40.094 SYMLINK libspdk_scheduler_gscheduler.so 00:02:40.094 SO libspdk_blob_bdev.so.11.0 00:02:40.094 SYMLINK libspdk_accel_ioat.so 00:02:40.094 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:40.094 SO libspdk_accel_dsa.so.5.0 00:02:40.094 SYMLINK libspdk_scheduler_dynamic.so 00:02:40.094 SYMLINK libspdk_accel_iaa.so 00:02:40.094 SYMLINK libspdk_accel_error.so 00:02:40.356 SYMLINK libspdk_blob_bdev.so 00:02:40.356 SYMLINK libspdk_accel_dsa.so 00:02:40.617 LIB libspdk_sock_posix.a 00:02:40.617 SO libspdk_sock_posix.so.6.0 00:02:40.617 SYMLINK libspdk_sock_posix.so 00:02:40.876 CC module/bdev/gpt/gpt.o 00:02:40.876 CC 
module/bdev/gpt/vbdev_gpt.o 00:02:40.876 CC module/bdev/delay/vbdev_delay.o 00:02:40.876 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:40.876 CC module/bdev/lvol/vbdev_lvol.o 00:02:40.876 CC module/bdev/error/vbdev_error.o 00:02:40.876 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:40.876 CC module/bdev/error/vbdev_error_rpc.o 00:02:40.876 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.876 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.876 CC module/bdev/passthru/vbdev_passthru.o 00:02:40.876 CC module/bdev/raid/bdev_raid.o 00:02:40.876 CC module/bdev/nvme/bdev_nvme.o 00:02:40.876 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:40.876 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:40.876 CC module/bdev/nvme/bdev_mdns_client.o 00:02:40.876 CC module/bdev/nvme/nvme_rpc.o 00:02:40.876 CC module/bdev/raid/bdev_raid_rpc.o 00:02:40.876 CC module/bdev/malloc/bdev_malloc.o 00:02:40.876 CC module/bdev/raid/raid0.o 00:02:40.876 CC module/bdev/nvme/vbdev_opal.o 00:02:40.876 CC module/bdev/raid/bdev_raid_sb.o 00:02:40.876 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:40.876 CC module/bdev/raid/raid1.o 00:02:40.876 CC module/bdev/null/bdev_null.o 00:02:40.876 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:40.876 CC module/bdev/ftl/bdev_ftl.o 00:02:40.876 CC module/bdev/raid/concat.o 00:02:40.876 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:40.876 CC module/bdev/null/bdev_null_rpc.o 00:02:40.876 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:40.876 CC module/blobfs/bdev/blobfs_bdev.o 00:02:40.876 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:40.876 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:40.876 CC module/bdev/aio/bdev_aio.o 00:02:40.876 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:40.876 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.876 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:40.876 CC module/bdev/iscsi/bdev_iscsi.o 00:02:40.876 CC module/bdev/split/vbdev_split.o 00:02:40.876 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:40.876 CC module/bdev/split/vbdev_split_rpc.o 00:02:41.136 LIB libspdk_bdev_null.a 00:02:41.136 LIB libspdk_blobfs_bdev.a 00:02:41.136 LIB libspdk_bdev_error.a 00:02:41.136 LIB libspdk_bdev_gpt.a 00:02:41.136 SO libspdk_bdev_null.so.6.0 00:02:41.136 LIB libspdk_bdev_split.a 00:02:41.136 SO libspdk_blobfs_bdev.so.6.0 00:02:41.136 SO libspdk_bdev_error.so.6.0 00:02:41.136 SO libspdk_bdev_split.so.6.0 00:02:41.136 SO libspdk_bdev_gpt.so.6.0 00:02:41.136 LIB libspdk_bdev_ftl.a 00:02:41.136 SYMLINK libspdk_bdev_null.so 00:02:41.136 LIB libspdk_bdev_passthru.a 00:02:41.136 LIB libspdk_bdev_zone_block.a 00:02:41.136 LIB libspdk_bdev_delay.a 00:02:41.136 SYMLINK libspdk_bdev_error.so 00:02:41.136 SO libspdk_bdev_ftl.so.6.0 00:02:41.136 SO libspdk_bdev_passthru.so.6.0 00:02:41.136 SYMLINK libspdk_blobfs_bdev.so 00:02:41.136 LIB libspdk_bdev_aio.a 00:02:41.136 LIB libspdk_bdev_iscsi.a 00:02:41.136 SO libspdk_bdev_zone_block.so.6.0 00:02:41.136 SYMLINK libspdk_bdev_split.so 00:02:41.136 LIB libspdk_bdev_malloc.a 00:02:41.136 SYMLINK libspdk_bdev_gpt.so 00:02:41.136 SO libspdk_bdev_delay.so.6.0 00:02:41.136 SO libspdk_bdev_iscsi.so.6.0 00:02:41.136 SO libspdk_bdev_aio.so.6.0 00:02:41.136 SO libspdk_bdev_malloc.so.6.0 00:02:41.136 SYMLINK libspdk_bdev_ftl.so 00:02:41.136 SYMLINK libspdk_bdev_passthru.so 00:02:41.136 SYMLINK libspdk_bdev_zone_block.so 00:02:41.136 SYMLINK libspdk_bdev_delay.so 00:02:41.136 SYMLINK libspdk_bdev_iscsi.so 00:02:41.136 LIB libspdk_bdev_lvol.a 00:02:41.397 SYMLINK libspdk_bdev_aio.so 00:02:41.397 SYMLINK libspdk_bdev_malloc.so 00:02:41.397 LIB 
libspdk_bdev_virtio.a 00:02:41.397 SO libspdk_bdev_lvol.so.6.0 00:02:41.397 SO libspdk_bdev_virtio.so.6.0 00:02:41.397 SYMLINK libspdk_bdev_lvol.so 00:02:41.397 SYMLINK libspdk_bdev_virtio.so 00:02:41.659 LIB libspdk_bdev_raid.a 00:02:41.659 SO libspdk_bdev_raid.so.6.0 00:02:41.921 SYMLINK libspdk_bdev_raid.so 00:02:42.864 LIB libspdk_bdev_nvme.a 00:02:42.864 SO libspdk_bdev_nvme.so.7.0 00:02:42.864 SYMLINK libspdk_bdev_nvme.so 00:02:43.436 CC module/event/subsystems/vmd/vmd.o 00:02:43.436 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:43.436 CC module/event/subsystems/sock/sock.o 00:02:43.436 CC module/event/subsystems/iobuf/iobuf.o 00:02:43.436 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:43.436 CC module/event/subsystems/keyring/keyring.o 00:02:43.436 CC module/event/subsystems/scheduler/scheduler.o 00:02:43.436 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:43.697 LIB libspdk_event_keyring.a 00:02:43.697 LIB libspdk_event_vmd.a 00:02:43.697 LIB libspdk_event_iobuf.a 00:02:43.697 LIB libspdk_event_vhost_blk.a 00:02:43.697 LIB libspdk_event_sock.a 00:02:43.697 LIB libspdk_event_scheduler.a 00:02:43.697 SO libspdk_event_keyring.so.1.0 00:02:43.697 SO libspdk_event_sock.so.5.0 00:02:43.697 SO libspdk_event_vmd.so.6.0 00:02:43.697 SO libspdk_event_iobuf.so.3.0 00:02:43.697 SO libspdk_event_vhost_blk.so.3.0 00:02:43.697 SO libspdk_event_scheduler.so.4.0 00:02:43.697 SYMLINK libspdk_event_keyring.so 00:02:43.697 SYMLINK libspdk_event_iobuf.so 00:02:43.697 SYMLINK libspdk_event_sock.so 00:02:43.697 SYMLINK libspdk_event_vmd.so 00:02:43.697 SYMLINK libspdk_event_scheduler.so 00:02:43.697 SYMLINK libspdk_event_vhost_blk.so 00:02:44.268 CC module/event/subsystems/accel/accel.o 00:02:44.268 LIB libspdk_event_accel.a 00:02:44.268 SO libspdk_event_accel.so.6.0 00:02:44.529 SYMLINK libspdk_event_accel.so 00:02:44.790 CC module/event/subsystems/bdev/bdev.o 00:02:45.051 LIB libspdk_event_bdev.a 00:02:45.051 SO libspdk_event_bdev.so.6.0 00:02:45.051 SYMLINK libspdk_event_bdev.so 00:02:45.312 CC module/event/subsystems/scsi/scsi.o 00:02:45.312 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:45.312 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.312 CC module/event/subsystems/ublk/ublk.o 00:02:45.312 CC module/event/subsystems/nbd/nbd.o 00:02:45.572 LIB libspdk_event_ublk.a 00:02:45.572 LIB libspdk_event_nbd.a 00:02:45.572 LIB libspdk_event_scsi.a 00:02:45.572 SO libspdk_event_ublk.so.3.0 00:02:45.572 SO libspdk_event_nbd.so.6.0 00:02:45.572 SO libspdk_event_scsi.so.6.0 00:02:45.572 LIB libspdk_event_nvmf.a 00:02:45.572 SYMLINK libspdk_event_nbd.so 00:02:45.572 SYMLINK libspdk_event_ublk.so 00:02:45.572 SO libspdk_event_nvmf.so.6.0 00:02:45.572 SYMLINK libspdk_event_scsi.so 00:02:45.832 SYMLINK libspdk_event_nvmf.so 00:02:46.093 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:46.093 CC module/event/subsystems/iscsi/iscsi.o 00:02:46.093 LIB libspdk_event_vhost_scsi.a 00:02:46.093 LIB libspdk_event_iscsi.a 00:02:46.093 SO libspdk_event_vhost_scsi.so.3.0 00:02:46.353 SO libspdk_event_iscsi.so.6.0 00:02:46.353 SYMLINK libspdk_event_vhost_scsi.so 00:02:46.353 SYMLINK libspdk_event_iscsi.so 00:02:46.614 SO libspdk.so.6.0 00:02:46.614 SYMLINK libspdk.so 00:02:46.874 CXX app/trace/trace.o 00:02:46.874 CC app/trace_record/trace_record.o 00:02:46.874 CC app/spdk_top/spdk_top.o 00:02:46.874 CC app/spdk_lspci/spdk_lspci.o 00:02:46.874 CC app/spdk_nvme_perf/perf.o 00:02:46.874 CC app/spdk_nvme_identify/identify.o 00:02:46.874 CC app/spdk_nvme_discover/discovery_aer.o 00:02:46.874 CC 
test/rpc_client/rpc_client_test.o 00:02:46.874 TEST_HEADER include/spdk/accel.h 00:02:46.874 TEST_HEADER include/spdk/accel_module.h 00:02:46.874 TEST_HEADER include/spdk/assert.h 00:02:46.874 TEST_HEADER include/spdk/barrier.h 00:02:46.874 TEST_HEADER include/spdk/base64.h 00:02:46.874 TEST_HEADER include/spdk/bdev.h 00:02:46.874 TEST_HEADER include/spdk/bdev_module.h 00:02:46.874 TEST_HEADER include/spdk/bdev_zone.h 00:02:46.874 TEST_HEADER include/spdk/bit_array.h 00:02:46.874 TEST_HEADER include/spdk/bit_pool.h 00:02:46.874 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:46.874 TEST_HEADER include/spdk/blob_bdev.h 00:02:46.874 TEST_HEADER include/spdk/blobfs.h 00:02:46.874 TEST_HEADER include/spdk/blob.h 00:02:46.874 TEST_HEADER include/spdk/conf.h 00:02:46.874 TEST_HEADER include/spdk/config.h 00:02:46.874 TEST_HEADER include/spdk/cpuset.h 00:02:46.874 TEST_HEADER include/spdk/crc16.h 00:02:46.874 TEST_HEADER include/spdk/crc32.h 00:02:46.874 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:46.874 TEST_HEADER include/spdk/crc64.h 00:02:46.874 TEST_HEADER include/spdk/dif.h 00:02:46.874 TEST_HEADER include/spdk/dma.h 00:02:46.874 TEST_HEADER include/spdk/endian.h 00:02:46.874 TEST_HEADER include/spdk/env.h 00:02:46.874 CC app/iscsi_tgt/iscsi_tgt.o 00:02:46.874 CC app/nvmf_tgt/nvmf_main.o 00:02:46.874 TEST_HEADER include/spdk/event.h 00:02:46.874 TEST_HEADER include/spdk/env_dpdk.h 00:02:46.874 TEST_HEADER include/spdk/fd_group.h 00:02:46.874 TEST_HEADER include/spdk/fd.h 00:02:46.874 TEST_HEADER include/spdk/file.h 00:02:46.874 CC app/spdk_dd/spdk_dd.o 00:02:46.874 TEST_HEADER include/spdk/ftl.h 00:02:46.874 TEST_HEADER include/spdk/gpt_spec.h 00:02:46.874 TEST_HEADER include/spdk/hexlify.h 00:02:46.874 TEST_HEADER include/spdk/histogram_data.h 00:02:46.874 TEST_HEADER include/spdk/idxd_spec.h 00:02:46.874 TEST_HEADER include/spdk/idxd.h 00:02:46.874 TEST_HEADER include/spdk/init.h 00:02:46.874 TEST_HEADER include/spdk/ioat.h 00:02:46.874 TEST_HEADER include/spdk/ioat_spec.h 00:02:46.874 TEST_HEADER include/spdk/iscsi_spec.h 00:02:46.874 TEST_HEADER include/spdk/json.h 00:02:46.874 TEST_HEADER include/spdk/jsonrpc.h 00:02:46.874 TEST_HEADER include/spdk/keyring.h 00:02:46.874 TEST_HEADER include/spdk/keyring_module.h 00:02:46.874 TEST_HEADER include/spdk/likely.h 00:02:46.874 TEST_HEADER include/spdk/lvol.h 00:02:46.874 CC app/spdk_tgt/spdk_tgt.o 00:02:46.874 TEST_HEADER include/spdk/log.h 00:02:46.874 TEST_HEADER include/spdk/memory.h 00:02:46.874 TEST_HEADER include/spdk/mmio.h 00:02:46.874 TEST_HEADER include/spdk/nbd.h 00:02:46.874 TEST_HEADER include/spdk/notify.h 00:02:46.874 TEST_HEADER include/spdk/nvme.h 00:02:46.874 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:46.874 TEST_HEADER include/spdk/nvme_intel.h 00:02:46.874 TEST_HEADER include/spdk/nvme_spec.h 00:02:46.874 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:47.135 TEST_HEADER include/spdk/nvme_zns.h 00:02:47.135 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:47.135 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:47.135 TEST_HEADER include/spdk/nvmf.h 00:02:47.135 TEST_HEADER include/spdk/nvmf_spec.h 00:02:47.135 TEST_HEADER include/spdk/nvmf_transport.h 00:02:47.135 TEST_HEADER include/spdk/opal.h 00:02:47.135 TEST_HEADER include/spdk/pci_ids.h 00:02:47.135 TEST_HEADER include/spdk/opal_spec.h 00:02:47.135 TEST_HEADER include/spdk/queue.h 00:02:47.135 TEST_HEADER include/spdk/pipe.h 00:02:47.135 TEST_HEADER include/spdk/reduce.h 00:02:47.135 TEST_HEADER include/spdk/rpc.h 00:02:47.135 TEST_HEADER include/spdk/scheduler.h 
00:02:47.135 TEST_HEADER include/spdk/scsi_spec.h 00:02:47.135 TEST_HEADER include/spdk/scsi.h 00:02:47.135 TEST_HEADER include/spdk/sock.h 00:02:47.135 TEST_HEADER include/spdk/stdinc.h 00:02:47.135 TEST_HEADER include/spdk/string.h 00:02:47.135 TEST_HEADER include/spdk/thread.h 00:02:47.135 TEST_HEADER include/spdk/trace.h 00:02:47.135 TEST_HEADER include/spdk/trace_parser.h 00:02:47.135 TEST_HEADER include/spdk/tree.h 00:02:47.135 TEST_HEADER include/spdk/ublk.h 00:02:47.135 TEST_HEADER include/spdk/util.h 00:02:47.135 TEST_HEADER include/spdk/uuid.h 00:02:47.135 TEST_HEADER include/spdk/version.h 00:02:47.135 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:47.135 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:47.135 TEST_HEADER include/spdk/vhost.h 00:02:47.135 TEST_HEADER include/spdk/vmd.h 00:02:47.135 TEST_HEADER include/spdk/xor.h 00:02:47.135 TEST_HEADER include/spdk/zipf.h 00:02:47.135 CXX test/cpp_headers/accel.o 00:02:47.135 CXX test/cpp_headers/accel_module.o 00:02:47.135 CXX test/cpp_headers/assert.o 00:02:47.135 CXX test/cpp_headers/barrier.o 00:02:47.135 CXX test/cpp_headers/base64.o 00:02:47.135 CXX test/cpp_headers/bdev.o 00:02:47.135 CXX test/cpp_headers/bdev_module.o 00:02:47.135 CXX test/cpp_headers/bdev_zone.o 00:02:47.135 CXX test/cpp_headers/bit_array.o 00:02:47.135 CXX test/cpp_headers/bit_pool.o 00:02:47.135 CXX test/cpp_headers/blob_bdev.o 00:02:47.135 CXX test/cpp_headers/blobfs_bdev.o 00:02:47.135 CXX test/cpp_headers/blobfs.o 00:02:47.135 CXX test/cpp_headers/blob.o 00:02:47.135 CXX test/cpp_headers/conf.o 00:02:47.135 CXX test/cpp_headers/config.o 00:02:47.135 CXX test/cpp_headers/cpuset.o 00:02:47.135 CXX test/cpp_headers/crc16.o 00:02:47.135 CXX test/cpp_headers/crc32.o 00:02:47.135 CXX test/cpp_headers/crc64.o 00:02:47.135 CXX test/cpp_headers/dif.o 00:02:47.135 CXX test/cpp_headers/dma.o 00:02:47.135 CXX test/cpp_headers/endian.o 00:02:47.135 CXX test/cpp_headers/env_dpdk.o 00:02:47.135 CC examples/ioat/verify/verify.o 00:02:47.135 CXX test/cpp_headers/env.o 00:02:47.135 CXX test/cpp_headers/event.o 00:02:47.135 CXX test/cpp_headers/fd_group.o 00:02:47.135 CXX test/cpp_headers/fd.o 00:02:47.135 CXX test/cpp_headers/gpt_spec.o 00:02:47.135 CXX test/cpp_headers/ftl.o 00:02:47.135 CXX test/cpp_headers/file.o 00:02:47.135 CXX test/cpp_headers/hexlify.o 00:02:47.135 CXX test/cpp_headers/idxd.o 00:02:47.135 CXX test/cpp_headers/histogram_data.o 00:02:47.135 CXX test/cpp_headers/idxd_spec.o 00:02:47.135 CXX test/cpp_headers/init.o 00:02:47.135 LINK spdk_lspci 00:02:47.135 CXX test/cpp_headers/ioat_spec.o 00:02:47.135 CC examples/ioat/perf/perf.o 00:02:47.135 CXX test/cpp_headers/ioat.o 00:02:47.135 CXX test/cpp_headers/json.o 00:02:47.135 CXX test/cpp_headers/jsonrpc.o 00:02:47.135 CC test/app/stub/stub.o 00:02:47.135 CC examples/util/zipf/zipf.o 00:02:47.135 CXX test/cpp_headers/iscsi_spec.o 00:02:47.135 CXX test/cpp_headers/keyring_module.o 00:02:47.135 CXX test/cpp_headers/likely.o 00:02:47.135 CXX test/cpp_headers/keyring.o 00:02:47.135 CXX test/cpp_headers/log.o 00:02:47.135 CXX test/cpp_headers/lvol.o 00:02:47.135 CC test/app/jsoncat/jsoncat.o 00:02:47.135 CXX test/cpp_headers/memory.o 00:02:47.135 CXX test/cpp_headers/mmio.o 00:02:47.136 CXX test/cpp_headers/nvme.o 00:02:47.136 CXX test/cpp_headers/notify.o 00:02:47.136 CXX test/cpp_headers/nvme_ocssd.o 00:02:47.136 CXX test/cpp_headers/nbd.o 00:02:47.136 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:47.136 CXX test/cpp_headers/nvme_intel.o 00:02:47.136 CC test/thread/poller_perf/poller_perf.o 00:02:47.136 
CXX test/cpp_headers/nvme_spec.o 00:02:47.136 CXX test/cpp_headers/nvme_zns.o 00:02:47.136 CXX test/cpp_headers/nvmf_cmd.o 00:02:47.136 CC test/app/histogram_perf/histogram_perf.o 00:02:47.136 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:47.136 CXX test/cpp_headers/nvmf.o 00:02:47.136 CXX test/cpp_headers/nvmf_spec.o 00:02:47.136 CXX test/cpp_headers/pci_ids.o 00:02:47.136 CXX test/cpp_headers/opal.o 00:02:47.136 CXX test/cpp_headers/nvmf_transport.o 00:02:47.136 CXX test/cpp_headers/opal_spec.o 00:02:47.136 CXX test/cpp_headers/pipe.o 00:02:47.136 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:47.136 CXX test/cpp_headers/rpc.o 00:02:47.136 CXX test/cpp_headers/reduce.o 00:02:47.136 CC test/dma/test_dma/test_dma.o 00:02:47.136 CXX test/cpp_headers/queue.o 00:02:47.136 CXX test/cpp_headers/scheduler.o 00:02:47.136 CC test/env/vtophys/vtophys.o 00:02:47.136 CC test/env/memory/memory_ut.o 00:02:47.136 CXX test/cpp_headers/stdinc.o 00:02:47.136 CXX test/cpp_headers/sock.o 00:02:47.136 CXX test/cpp_headers/scsi.o 00:02:47.136 CXX test/cpp_headers/scsi_spec.o 00:02:47.136 CC test/env/pci/pci_ut.o 00:02:47.136 CXX test/cpp_headers/string.o 00:02:47.136 CXX test/cpp_headers/thread.o 00:02:47.136 CC app/fio/nvme/fio_plugin.o 00:02:47.136 CXX test/cpp_headers/trace_parser.o 00:02:47.136 CXX test/cpp_headers/trace.o 00:02:47.136 CXX test/cpp_headers/ublk.o 00:02:47.136 CXX test/cpp_headers/tree.o 00:02:47.136 CXX test/cpp_headers/util.o 00:02:47.136 CXX test/cpp_headers/uuid.o 00:02:47.136 CXX test/cpp_headers/version.o 00:02:47.136 CXX test/cpp_headers/vfio_user_pci.o 00:02:47.136 CXX test/cpp_headers/vfio_user_spec.o 00:02:47.136 CXX test/cpp_headers/vmd.o 00:02:47.136 CXX test/cpp_headers/vhost.o 00:02:47.136 CXX test/cpp_headers/xor.o 00:02:47.136 CXX test/cpp_headers/zipf.o 00:02:47.136 CC test/app/bdev_svc/bdev_svc.o 00:02:47.136 LINK spdk_nvme_discover 00:02:47.136 CC app/fio/bdev/fio_plugin.o 00:02:47.136 LINK rpc_client_test 00:02:47.397 LINK nvmf_tgt 00:02:47.397 LINK spdk_trace_record 00:02:47.397 LINK interrupt_tgt 00:02:47.397 LINK iscsi_tgt 00:02:47.397 LINK spdk_tgt 00:02:47.397 CC test/env/mem_callbacks/mem_callbacks.o 00:02:47.397 LINK spdk_trace 00:02:47.397 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.397 LINK jsoncat 00:02:47.656 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.656 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:47.656 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:47.656 LINK zipf 00:02:47.656 LINK stub 00:02:47.656 LINK spdk_dd 00:02:47.656 LINK histogram_perf 00:02:47.656 LINK verify 00:02:47.656 LINK bdev_svc 00:02:47.656 LINK env_dpdk_post_init 00:02:47.656 LINK ioat_perf 00:02:47.656 LINK poller_perf 00:02:47.656 LINK vtophys 00:02:47.916 CC app/vhost/vhost.o 00:02:47.916 LINK spdk_nvme_perf 00:02:47.916 LINK pci_ut 00:02:47.916 LINK nvme_fuzz 00:02:48.175 LINK test_dma 00:02:48.175 CC examples/idxd/perf/perf.o 00:02:48.175 CC examples/vmd/lsvmd/lsvmd.o 00:02:48.175 CC examples/vmd/led/led.o 00:02:48.175 CC examples/sock/hello_world/hello_sock.o 00:02:48.175 LINK vhost_fuzz 00:02:48.175 CC examples/thread/thread/thread_ex.o 00:02:48.175 LINK spdk_nvme 00:02:48.175 LINK vhost 00:02:48.175 LINK spdk_bdev 00:02:48.175 LINK mem_callbacks 00:02:48.175 LINK lsvmd 00:02:48.175 LINK spdk_top 00:02:48.175 LINK spdk_nvme_identify 00:02:48.175 CC test/event/event_perf/event_perf.o 00:02:48.175 CC test/event/reactor/reactor.o 00:02:48.175 CC test/event/reactor_perf/reactor_perf.o 00:02:48.175 LINK led 00:02:48.175 CC test/event/app_repeat/app_repeat.o 
00:02:48.175 CC test/event/scheduler/scheduler.o 00:02:48.434 LINK thread 00:02:48.434 LINK hello_sock 00:02:48.434 LINK idxd_perf 00:02:48.434 LINK reactor_perf 00:02:48.434 LINK reactor 00:02:48.434 LINK event_perf 00:02:48.434 LINK app_repeat 00:02:48.434 LINK scheduler 00:02:48.434 CC test/nvme/e2edp/nvme_dp.o 00:02:48.434 CC test/nvme/overhead/overhead.o 00:02:48.692 CC test/nvme/reset/reset.o 00:02:48.692 CC test/nvme/fdp/fdp.o 00:02:48.692 CC test/nvme/sgl/sgl.o 00:02:48.692 CC test/nvme/compliance/nvme_compliance.o 00:02:48.693 CC test/nvme/cuse/cuse.o 00:02:48.693 CC test/nvme/aer/aer.o 00:02:48.693 CC test/nvme/startup/startup.o 00:02:48.693 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.693 CC test/nvme/reserve/reserve.o 00:02:48.693 CC test/nvme/err_injection/err_injection.o 00:02:48.693 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.693 CC test/nvme/boot_partition/boot_partition.o 00:02:48.693 CC test/nvme/simple_copy/simple_copy.o 00:02:48.693 CC test/nvme/connect_stress/connect_stress.o 00:02:48.693 CC test/accel/dif/dif.o 00:02:48.693 LINK memory_ut 00:02:48.693 CC test/blobfs/mkfs/mkfs.o 00:02:48.693 CC test/lvol/esnap/esnap.o 00:02:48.693 CC examples/nvme/arbitration/arbitration.o 00:02:48.693 LINK reserve 00:02:48.693 CC examples/nvme/reconnect/reconnect.o 00:02:48.693 LINK startup 00:02:48.693 CC examples/nvme/hello_world/hello_world.o 00:02:48.693 LINK err_injection 00:02:48.693 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:48.693 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:48.693 LINK doorbell_aers 00:02:48.693 LINK boot_partition 00:02:48.693 CC examples/nvme/abort/abort.o 00:02:48.693 CC examples/nvme/hotplug/hotplug.o 00:02:48.693 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:48.693 LINK fused_ordering 00:02:48.952 LINK connect_stress 00:02:48.952 LINK reset 00:02:48.952 CC examples/accel/perf/accel_perf.o 00:02:48.952 LINK sgl 00:02:48.952 LINK nvme_dp 00:02:48.952 LINK overhead 00:02:48.952 LINK mkfs 00:02:48.952 LINK simple_copy 00:02:48.952 LINK aer 00:02:48.952 LINK fdp 00:02:48.952 LINK nvme_compliance 00:02:48.952 CC examples/blob/hello_world/hello_blob.o 00:02:48.952 CC examples/blob/cli/blobcli.o 00:02:48.952 LINK iscsi_fuzz 00:02:48.952 LINK cmb_copy 00:02:48.952 LINK pmr_persistence 00:02:48.952 LINK dif 00:02:48.952 LINK hello_world 00:02:48.952 LINK hotplug 00:02:49.213 LINK arbitration 00:02:49.213 LINK reconnect 00:02:49.213 LINK hello_blob 00:02:49.213 LINK abort 00:02:49.213 LINK nvme_manage 00:02:49.213 LINK accel_perf 00:02:49.475 LINK blobcli 00:02:49.736 CC test/bdev/bdevio/bdevio.o 00:02:49.736 LINK cuse 00:02:49.736 CC examples/bdev/hello_world/hello_bdev.o 00:02:49.736 CC examples/bdev/bdevperf/bdevperf.o 00:02:49.996 LINK bdevio 00:02:49.996 LINK hello_bdev 00:02:50.568 LINK bdevperf 00:02:51.139 CC examples/nvmf/nvmf/nvmf.o 00:02:51.399 LINK nvmf 00:02:52.827 LINK esnap 00:02:53.397 00:02:53.397 real 0m50.425s 00:02:53.397 user 6m23.548s 00:02:53.397 sys 3m59.620s 00:02:53.397 14:45:09 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:53.397 14:45:09 make -- common/autotest_common.sh@10 -- $ set +x 00:02:53.397 ************************************ 00:02:53.397 END TEST make 00:02:53.397 ************************************ 00:02:53.397 14:45:09 -- common/autotest_common.sh@1142 -- $ return 0 00:02:53.397 14:45:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:53.397 14:45:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:53.397 14:45:09 -- pm/common@40 -- $ local monitor pid pids 
signal=TERM 00:02:53.397 14:45:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.397 14:45:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:53.397 14:45:09 -- pm/common@44 -- $ pid=1489034 00:02:53.397 14:45:09 -- pm/common@50 -- $ kill -TERM 1489034 00:02:53.397 14:45:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.397 14:45:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:53.397 14:45:09 -- pm/common@44 -- $ pid=1489035 00:02:53.397 14:45:09 -- pm/common@50 -- $ kill -TERM 1489035 00:02:53.397 14:45:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.397 14:45:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:53.397 14:45:09 -- pm/common@44 -- $ pid=1489037 00:02:53.397 14:45:09 -- pm/common@50 -- $ kill -TERM 1489037 00:02:53.397 14:45:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.397 14:45:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:53.397 14:45:09 -- pm/common@44 -- $ pid=1489061 00:02:53.397 14:45:09 -- pm/common@50 -- $ sudo -E kill -TERM 1489061 00:02:53.397 14:45:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:53.397 14:45:09 -- nvmf/common.sh@7 -- # uname -s 00:02:53.397 14:45:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:53.397 14:45:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:53.397 14:45:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:53.397 14:45:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:53.397 14:45:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:53.397 14:45:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:53.397 14:45:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:53.397 14:45:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:53.397 14:45:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:53.397 14:45:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:53.397 14:45:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:53.397 14:45:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:53.397 14:45:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:53.397 14:45:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:53.397 14:45:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:53.397 14:45:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:53.397 14:45:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:53.397 14:45:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:53.397 14:45:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.397 14:45:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.397 14:45:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.397 14:45:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.397 14:45:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.397 14:45:09 -- paths/export.sh@5 -- # export PATH 00:02:53.397 14:45:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.397 14:45:09 -- nvmf/common.sh@47 -- # : 0 00:02:53.397 14:45:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:53.397 14:45:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:53.397 14:45:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:53.397 14:45:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:53.397 14:45:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:53.397 14:45:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:53.397 14:45:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:53.397 14:45:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:53.397 14:45:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:53.397 14:45:09 -- spdk/autotest.sh@32 -- # uname -s 00:02:53.397 14:45:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:53.397 14:45:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:53.397 14:45:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:53.397 14:45:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:53.397 14:45:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:53.397 14:45:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:53.397 14:45:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:53.397 14:45:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:53.397 14:45:09 -- spdk/autotest.sh@48 -- # udevadm_pid=1551707 00:02:53.397 14:45:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:53.397 14:45:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:53.397 14:45:09 -- pm/common@17 -- # local monitor 00:02:53.397 14:45:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.397 14:45:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.397 14:45:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.397 14:45:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.397 14:45:09 -- pm/common@21 -- # date +%s 00:02:53.397 14:45:09 -- pm/common@25 -- # sleep 1 00:02:53.397 14:45:09 -- pm/common@21 -- # date +%s 00:02:53.397 14:45:09 -- pm/common@21 -- # date +%s 00:02:53.397 14:45:09 -- pm/common@21 -- # date +%s 00:02:53.397 14:45:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047509 00:02:53.397 14:45:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047509 00:02:53.397 14:45:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047509 00:02:53.397 14:45:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047509 00:02:53.397 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047509_collect-vmstat.pm.log 00:02:53.397 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047509_collect-cpu-load.pm.log 00:02:53.397 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047509_collect-cpu-temp.pm.log 00:02:53.397 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047509_collect-bmc-pm.bmc.pm.log 00:02:54.340 14:45:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:54.340 14:45:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:54.340 14:45:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:54.340 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:02:54.602 14:45:10 -- spdk/autotest.sh@59 -- # create_test_list 00:02:54.602 14:45:10 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:54.602 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:02:54.602 14:45:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:54.602 14:45:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:54.602 14:45:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:54.602 14:45:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:54.602 14:45:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:54.602 14:45:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:54.602 14:45:10 -- common/autotest_common.sh@1455 -- # uname 00:02:54.602 14:45:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:54.602 14:45:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:54.602 14:45:10 -- common/autotest_common.sh@1475 -- # uname 00:02:54.602 14:45:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:54.602 14:45:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:54.602 14:45:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:54.602 14:45:10 -- spdk/autotest.sh@72 -- # hash lcov 00:02:54.602 14:45:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:54.602 14:45:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:54.602 --rc lcov_branch_coverage=1 00:02:54.602 --rc lcov_function_coverage=1 00:02:54.602 --rc genhtml_branch_coverage=1 00:02:54.602 --rc genhtml_function_coverage=1 00:02:54.602 --rc genhtml_legend=1 00:02:54.602 --rc geninfo_all_blocks=1 00:02:54.602 ' 00:02:54.602 14:45:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 
00:02:54.602 --rc lcov_branch_coverage=1 00:02:54.602 --rc lcov_function_coverage=1 00:02:54.602 --rc genhtml_branch_coverage=1 00:02:54.602 --rc genhtml_function_coverage=1 00:02:54.602 --rc genhtml_legend=1 00:02:54.602 --rc geninfo_all_blocks=1 00:02:54.602 ' 00:02:54.602 14:45:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:54.602 --rc lcov_branch_coverage=1 00:02:54.602 --rc lcov_function_coverage=1 00:02:54.602 --rc genhtml_branch_coverage=1 00:02:54.602 --rc genhtml_function_coverage=1 00:02:54.602 --rc genhtml_legend=1 00:02:54.602 --rc geninfo_all_blocks=1 00:02:54.602 --no-external' 00:02:54.602 14:45:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:54.602 --rc lcov_branch_coverage=1 00:02:54.602 --rc lcov_function_coverage=1 00:02:54.602 --rc genhtml_branch_coverage=1 00:02:54.602 --rc genhtml_function_coverage=1 00:02:54.602 --rc genhtml_legend=1 00:02:54.602 --rc geninfo_all_blocks=1 00:02:54.602 --no-external' 00:02:54.602 14:45:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:54.602 lcov: LCOV version 1.14 00:02:54.602 14:45:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:09.506 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:09.506 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:19.503 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:19.503 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:19.503 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:19.503 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:19.503 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:19.503 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:19.503 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:19.503 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:19.503 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:19.503 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:19.503 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:19.503 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:19.503 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:19.503 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:03:19.503 [the same two-line pair ("<header>.gcno:no functions found" / "geninfo: WARNING: GCOV did not produce any data for <header>.gcno") repeats here for each of these test/cpp_headers stubs: bdev_module, conf, bit_array, bit_pool, blobfs_bdev, blob, blob_bdev, blobfs, cpuset, dma, crc64, crc32, crc16, config, env_dpdk, dif, event, env, endian, gpt_spec, fd, fd_group, ftl, file, idxd_spec, idxd, hexlify, json, init, ioat_spec, histogram_data, ioat, likely, jsonrpc, keyring_module, lvol, nvme_ocssd, log, iscsi_spec, mmio, notify, nvme_spec, nvme_intel, keyring, nvme_zns, nvme, pci_ids, nvme_ocssd_spec, opal, nvmf_cmd, memory, nbd, scheduler, opal_spec, nvmf, queue, nvmf_fc_spec, nvmf_spec, reduce, pipe, string, thread, nvmf_transport, trace_parser, rpc, scsi, scsi_spec, sock, stdinc, trace, tree; the remaining header stubs (zipf, ublk, uuid, util, version, vfio_user_spec, vmd, vfio_user_pci, vhost, xor) follow verbatim below]
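The capture traced above is lcov's "initial" pass: -c -i records a zero-count baseline right after the build, which is why header-only stubs legitimately report no functions yet. A minimal sketch of the capture-and-combine flow autotest is driving here (the second capture and the cov_test.info/cov_total.info names are illustrative, not taken from this log):

    # zero-count baseline straight after the build (-i = initial); flags as in the log
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external \
         -q -c -i -t Baseline -d /path/to/spdk -o cov_base.info
    # ...run the test suites, then capture the real execution counters...
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external \
         -q -c -t Tests -d /path/to/spdk -o cov_test.info
    # merge both tracefiles so never-executed sources still show up at 0% coverage
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info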
00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:19.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:19.504 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:22.877 14:45:38 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:22.877 14:45:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:22.877 14:45:38 -- common/autotest_common.sh@10 -- # set +x 00:03:22.877 14:45:38 -- spdk/autotest.sh@91 -- # rm -f 00:03:22.877 14:45:38 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.079 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:27.079 0000:00:01.6 (8086 0b00): Already 
using the ioatdma driver 00:03:27.079 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:27.079 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:27.079 14:45:42 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:27.079 14:45:42 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:27.079 14:45:42 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:27.079 14:45:42 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:27.079 14:45:42 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.079 14:45:42 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:27.079 14:45:42 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:27.079 14:45:42 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.079 14:45:42 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.079 14:45:42 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:27.079 14:45:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.079 14:45:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.079 14:45:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:27.079 14:45:42 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:27.079 14:45:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:27.079 No valid GPT data, bailing 00:03:27.079 14:45:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.079 14:45:42 -- scripts/common.sh@391 -- # pt= 00:03:27.079 14:45:42 -- scripts/common.sh@392 -- # return 1 00:03:27.079 14:45:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:27.079 1+0 records in 00:03:27.079 1+0 records out 00:03:27.079 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00168544 s, 622 MB/s 00:03:27.079 14:45:42 -- spdk/autotest.sh@118 -- # sync 00:03:27.079 14:45:42 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.079 14:45:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.079 14:45:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:35.211 14:45:50 -- spdk/autotest.sh@124 -- # uname -s 00:03:35.211 14:45:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:35.211 14:45:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:35.211 14:45:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.211 14:45:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.211 14:45:50 -- common/autotest_common.sh@10 -- # set +x 00:03:35.211 ************************************ 00:03:35.211 START TEST setup.sh 00:03:35.211 ************************************ 00:03:35.211 14:45:50 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:35.211 * Looking for test storage... 
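Two guards in the device scrub just traced are worth calling out: is_block_zoned reads /sys/block/<name>/queue/zoned (here it returns "none", so the namespace is ordinary), and the dd wipe only runs after spdk-gpt.py and blkid both fail to find a partition table. A condensed sketch of that logic, assuming a disposable test disk at /dev/nvme0n1 (the variable names are illustrative):

    dev=/dev/nvme0n1
    # zoned namespaces (host-aware/host-managed) cannot be scrubbed this way
    if [[ $(cat "/sys/block/${dev##*/}/queue/zoned") != none ]]; then
        exit 0
    fi
    # wipe only when no partition-table type is detected on the device
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        # zero the first MiB to drop stale GPT/filesystem metadata
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi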
00:03:35.211 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:35.211 14:45:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:35.211 14:45:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:35.211 14:45:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:35.211 14:45:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.211 14:45:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.211 14:45:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:35.211 ************************************ 00:03:35.211 START TEST acl 00:03:35.211 ************************************ 00:03:35.211 14:45:50 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:35.211 * Looking for test storage... 00:03:35.211 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:35.211 14:45:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:35.211 14:45:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:35.211 14:45:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:35.211 14:45:50 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:35.211 14:45:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:35.211 14:45:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:35.211 14:45:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:35.211 14:45:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.211 14:45:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:35.211 14:45:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:35.211 14:45:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:35.211 14:45:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:35.211 14:45:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:35.211 14:45:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:35.211 14:45:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.211 14:45:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.416 14:45:55 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:39.416 14:45:55 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:39.416 14:45:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.416 14:45:55 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:39.416 14:45:55 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.416 14:45:55 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:42.720 Hugepages 00:03:42.720 node hugesize free / total 00:03:42.720 14:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:42.720 14:45:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.720 14:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.982 
14:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.982 00:03:42.982 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.982
[setup/acl.sh@19/@20: the '[[ <bdf> == *:*:*.* ]] / [[ ioatdma == nvme ]] / continue / read -r _ dev _ _ _ driver _' sequence repeats here for the ioatdma channels 0000:00:01.0 through 0000:00:01.7]
00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:42.982 14:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.982
[the same @19/@20 skip sequence repeats for the second ioatdma bank, 0000:80:01.0 through 0000:80:01.7]
00:03:42.983 14:45:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:42.983 14:45:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:42.983 14:45:59 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.983 14:45:59 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.983 14:45:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.983 ************************************ 00:03:42.983 START TEST denied 00:03:42.983 ************************************ 00:03:43.244 14:45:59 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:43.244 14:45:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:43.244 14:45:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
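The two sub-tests that follow exercise the allow/deny lists of scripts/setup.sh: "denied" exports PCI_BLOCKED and expects the NVMe controller to be skipped, while "allowed" (further below) exports PCI_ALLOWED and expects the same controller to be rebound to vfio-pci. A condensed sketch of what the harness checks, using the values from this log:

    # "denied": setup.sh must skip the blocked controller entirely
    export PCI_BLOCKED=' 0000:65:00.0'
    ./scripts/setup.sh config | grep 'Skipping denied controller at 0000:65:00.0'
    ./scripts/setup.sh reset
    unset PCI_BLOCKED

    # "allowed": only the listed controller may be rebound (nvme -> vfio-pci)
    export PCI_ALLOWED=0000:65:00.0
    ./scripts/setup.sh config | grep -E '0000:65:00.0 .*: nvme -> .*'
    ./scripts/setup.sh reset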
00:03:43.244 14:45:59 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:43.244 14:45:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.244 14:45:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:47.446 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.446 14:46:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.733 00:03:52.733 real 0m8.774s 00:03:52.733 user 0m2.952s 00:03:52.733 sys 0m5.144s 00:03:52.733 14:46:07 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.733 14:46:07 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:52.733 ************************************ 00:03:52.733 END TEST denied 00:03:52.733 ************************************ 00:03:52.733 14:46:07 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:52.733 14:46:07 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:52.733 14:46:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.733 14:46:07 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.733 14:46:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:52.733 ************************************ 00:03:52.733 START TEST allowed 00:03:52.733 ************************************ 00:03:52.733 14:46:07 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:52.733 14:46:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:52.733 14:46:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:52.733 14:46:07 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:52.733 14:46:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.733 14:46:07 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:58.049 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:58.049 14:46:13 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:58.049 14:46:13 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:58.049 14:46:13 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:58.049 14:46:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.049 14:46:13 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.253 00:04:02.253 real 0m9.628s 00:04:02.253 user 0m2.781s 00:04:02.253 sys
0m5.122s 00:04:02.253 14:46:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.253 14:46:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:02.253 ************************************ 00:04:02.253 END TEST allowed 00:04:02.253 ************************************ 00:04:02.253 14:46:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:02.253 00:04:02.253 real 0m26.752s 00:04:02.253 user 0m8.891s 00:04:02.253 sys 0m15.669s 00:04:02.254 14:46:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.254 14:46:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:02.254 ************************************ 00:04:02.254 END TEST acl 00:04:02.254 ************************************ 00:04:02.254 14:46:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.254 14:46:17 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:02.254 14:46:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.254 14:46:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.254 14:46:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.254 ************************************ 00:04:02.254 START TEST hugepages 00:04:02.254 ************************************ 00:04:02.254 14:46:17 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:02.254 * Looking for test storage... 00:04:02.254 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.254 14:46:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106195448 kB' 'MemAvailable: 109926000 kB' 'Buffers: 4132 kB' 'Cached: 10644472 kB' 'SwapCached: 0 kB' 'Active: 7588360 kB' 'Inactive: 3701232 kB' 'Active(anon): 7096928 kB' 'Inactive(anon): 0 kB' 'Active(file): 
491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643928 kB' 'Mapped: 197364 kB' 'Shmem: 6455940 kB' 'KReclaimable: 578832 kB' 'Slab: 1459320 kB' 'SReclaimable: 578832 kB' 'SUnreclaim: 880488 kB' 'KernelStack: 27792 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8706500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238124 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:02.254
[setup/common.sh@31/@32: the '[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue / IFS=': ' / read -r var val _' sequence repeats here for every /proc/meminfo key listed above, in order, until the final Hugepagesize entry matches]
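The get_meminfo call traced above walks /proc/meminfo (or a node's sysfs copy) field by field in pure bash until the requested key matches, and is about to echo 2048 for Hugepagesize. A condensed stand-alone equivalent (the function name matches the script, but this sed/awk body is a simplification, not the original implementation):

    get_meminfo() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        # per-node queries read the sysfs copy, whose lines carry a "Node N " prefix
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k { print $2; exit }'
    }

    get_meminfo Hugepagesize   # -> 2048 (kB) on this runner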
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- 
# local node hp 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.255 14:46:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:02.255 14:46:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.255 14:46:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.255 14:46:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.256 ************************************ 00:04:02.256 START TEST default_setup 00:04:02.256 ************************************ 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- 
# (( 1 > 0 )) 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.256 14:46:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:05.550 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:05.551 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:05.551 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:05.551 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:05.551 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:05.551 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:05.551 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:05.551 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:05.551 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:05.551 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:05.815 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:05.815 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:05.815 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:05.815 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:05.815 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:05.815 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:05.815 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
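[Editor's note] The xtrace above shows setup/common.sh's get_meminfo helper walking /proc/meminfo (or a per-node meminfo file) with IFS=': ' until it finds the requested key, echoing that key's value. The following is a minimal stand-alone sketch of that pattern, reconstructed from the trace only; the function name get_meminfo_sketch and its exact structure are illustrative assumptions, not the SPDK source.

#!/usr/bin/env bash
# Sketch of the meminfo-lookup loop seen in the trace (assumptions noted above).
shopt -s extglob   # needed for the "Node N " prefix strip, as in the traced script

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node meminfo files prefix every line with "Node N "; prefer them when a node is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
    local line var val _
    for line in "${mem[@]}"; do
        # IFS=': ' splits "Hugepagesize:    2048 kB" into var=Hugepagesize val=2048 _=kB
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example (values from this runner's log): get_meminfo_sketch Hugepagesize  -> 2048
#                                          get_meminfo_sketch HugePages_Free 0  -> node0 count, if the per-node file exists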
00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108390888 kB' 'MemAvailable: 112121344 kB' 'Buffers: 4132 kB' 'Cached: 10644588 kB' 'SwapCached: 0 kB' 'Active: 7602524 kB' 'Inactive: 3701232 kB' 'Active(anon): 7111092 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657832 kB' 'Mapped: 197528 kB' 'Shmem: 6456056 kB' 'KReclaimable: 578736 kB' 'Slab: 1456496 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877760 kB' 'KernelStack: 27824 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8723868 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238236 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.815 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 
14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.816 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.817 14:46:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108389328 kB' 'MemAvailable: 112119784 kB' 'Buffers: 4132 kB' 'Cached: 10644592 kB' 'SwapCached: 0 kB' 'Active: 7603556 kB' 'Inactive: 3701232 kB' 'Active(anon): 7112124 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659360 kB' 'Mapped: 197956 kB' 'Shmem: 6456060 kB' 'KReclaimable: 578736 kB' 'Slab: 1456480 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877744 kB' 'KernelStack: 27776 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8726016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238172 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.817 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 
14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 
14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.818 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108387060 kB' 'MemAvailable: 112117516 kB' 'Buffers: 4132 kB' 'Cached: 10644592 kB' 'SwapCached: 0 kB' 'Active: 7606512 kB' 'Inactive: 3701232 kB' 'Active(anon): 7115080 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662436 kB' 'Mapped: 197956 kB' 'Shmem: 6456060 kB' 'KReclaimable: 578736 kB' 'Slab: 1456480 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877744 kB' 'KernelStack: 27856 kB' 'PageTables: 9348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8728560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238140 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.819 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:05.820 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.130 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 
14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 
14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.131 nr_hugepages=1024 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.131 resv_hugepages=0 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.131 surplus_hugepages=0 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.131 anon_hugepages=0 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
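A note on the trace above: this is setup/common.sh's get_meminfo walking every /proc/meminfo key with IFS=': ' until the requested one matches (here HugePages_Rsvd, which resolves to 0), after stripping the optional "Node <id>" prefix that per-node meminfo files carry. A minimal standalone sketch of that lookup, assuming bash 4+ with extglob; the helper name meminfo_get is hypothetical, and the real implementation lives in setup/common.sh:

#!/usr/bin/env bash
shopt -s extglob  # required before parsing: +([0-9]) below is an extglob pattern

# Hypothetical helper mirroring the traced logic: print the value column for
# key $1 from /proc/meminfo, or from node $2's meminfo when a node id is given.
meminfo_get() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <id> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

meminfo_get HugePages_Rsvd    # prints 0 in the run above
meminfo_get HugePages_Surp 0  # node0 variant, as traced further down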
00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.131 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108382320 kB' 'MemAvailable: 112112776 kB' 'Buffers: 4132 kB' 'Cached: 10644632 kB' 'SwapCached: 0 kB' 'Active: 7602188 kB' 'Inactive: 3701232 kB' 'Active(anon): 7110756 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658036 kB' 'Mapped: 197452 kB' 'Shmem: 6456100 kB' 'KReclaimable: 578736 kB' 'Slab: 1456480 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877744 kB' 'KernelStack: 27808 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8723560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238140 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 
14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.132 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.133 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59505908 kB' 'MemUsed: 6153100 kB' 'SwapCached: 0 kB' 'Active: 1445760 kB' 'Inactive: 285928 kB' 'Active(anon): 1288012 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1605224 kB' 'Mapped: 41868 kB' 'AnonPages: 129768 kB' 'Shmem: 1161548 kB' 'KernelStack: 13656 kB' 'PageTables: 3140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324592 kB' 'Slab: 756384 kB' 'SReclaimable: 324592 kB' 'SUnreclaim: 431792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.134 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:06.135 node0=1024 expecting 1024
14:46:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:06.135
00:04:06.135 real 0m4.121s
00:04:06.135 user 0m1.552s
00:04:06.135 sys 0m2.570s
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:06.135 14:46:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:06.135 ************************************
00:04:06.135 END TEST default_setup
00:04:06.135 ************************************
00:04:06.135 14:46:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:06.135 14:46:21 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:06.135 14:46:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
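default_setup passes because the three meminfo lookups agree: HugePages_Total (1024) equals nr_hugepages + surplus + reserved (1024 + 0 + 0), and node0 reports all 1024 pages, matching the "node0=1024 expecting 1024" line. A sketch of the same consistency check, reusing the hypothetical meminfo_get helper from the note above:

# Verify global hugepage accounting the way the traced test does (sketch).
nr_expected=1024
surp=$(meminfo_get HugePages_Surp)
resv=$(meminfo_get HugePages_Rsvd)
total=$(meminfo_get HugePages_Total)
if (( total == nr_expected + surp + resv )); then
    # Per-node breakdown, mirroring the "node0=1024 expecting 1024" output.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        echo "node${node}=$(meminfo_get HugePages_Total "$node")"
    done
else
    echo "hugepage accounting mismatch: $total != $nr_expected + $surp + $resv" >&2
fi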
00:04:06.135 14:46:21 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:06.135 14:46:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:06.135 14:46:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:06.135 14:46:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:06.135 ************************************
00:04:06.135 START TEST per_node_1G_alloc
00:04:06.135 ************************************
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:06.135 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:06.136 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:06.136 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
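get_test_nr_hugepages turns the 1048576 kB (1 GiB) request into 512 default-size pages for each node in HUGENODE. A sketch of that arithmetic follows, assuming the 2048 kB Hugepagesize reported in the meminfo snapshots further down; the structure is illustrative, not the hugepages.sh original.

    # Sizing arithmetic behind nr_hugepages=512 above: 1 GiB expressed in kB,
    # divided by the default hugepage size, assigned to each requested node.
    size_kb=1048576
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
    nr_hugepages=$(( size_kb / hugepage_kb ))                        # 1048576 / 2048 = 512

    nodes_test=()
    for node in 0 1; do
        nodes_test[node]=$nr_hugepages
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"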
00:04:06.136 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:06.136 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.136 14:46:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:10.345 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:10.345 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
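With NRHUGE=512 and HUGENODE=0,1 exported, setup.sh reserves pages per NUMA node and rebinds devices (hence the vfio-pci lines above). The reservation itself goes through the standard kernel sysfs knobs; the following is a hedged sketch of that one step only, not the actual setup.sh logic, which also handles device binding, memlock limits and retries.

    # Per-node reservation equivalent to NRHUGE=512 HUGENODE=0,1 (sketch only).
    NRHUGE=512
    for node in 0 1; do
        echo "$NRHUGE" | sudo tee \
            "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
    done
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1024 each afterwards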
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.345 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' \
    'MemTotal: 126338848 kB' 'MemFree: 108393212 kB' 'MemAvailable: 112123668 kB' 'Buffers: 4132 kB' 'Cached: 10644748 kB' 'SwapCached: 0 kB' \
    'Active: 7602240 kB' 'Inactive: 3701232 kB' 'Active(anon): 7110808 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' \
    'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' \
    'AnonPages: 657844 kB' 'Mapped: 196320 kB' 'Shmem: 6456216 kB' 'KReclaimable: 578736 kB' 'Slab: 1456404 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877668 kB' \
    'KernelStack: 28080 kB' 'PageTables: 9636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' \
    'CommitLimit: 70509452 kB' 'Committed_AS: 8714440 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238396 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' \
    'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' \
    'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' \
    'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' \
    'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB'
00:04:10.346 [trace condensed: the setup/common.sh@31-32 read/compare loop emits "continue" for every key from MemTotal through HardwareCorrupted before the match below]
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
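The anon=0 assignment above is gated: verify_nr_hugepages reads AnonHugePages only because the host's transparent-hugepage setting is "always [madvise] never" rather than "[never]". A sketch of that gate, assuming the standard THP sysfs file:

    # THP gate sketched from the hugepages.sh@96-97 trace: skip the anon
    # measurement only when THP is pinned to [never].
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"   # 0 kB in the snapshot above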
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.346 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.347 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' \
    'MemTotal: 126338848 kB' 'MemFree: 108395276 kB' 'MemAvailable: 112125732 kB' 'Buffers: 4132 kB' 'Cached: 10644748 kB' 'SwapCached: 0 kB' \
    'Active: 7603152 kB' 'Inactive: 3701232 kB' 'Active(anon): 7111720 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' \
    'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' \
    'AnonPages: 658328 kB' 'Mapped: 196408 kB' 'Shmem: 6456216 kB' 'KReclaimable: 578736 kB' 'Slab: 1456392 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877656 kB' \
    'KernelStack: 27936 kB' 'PageTables: 9752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' \
    'CommitLimit: 70509452 kB' 'Committed_AS: 8712936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238412 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' \
    'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' \
    'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' \
    'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' \
    'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB'
00:04:10.348 [trace condensed: the setup/common.sh@31-32 read/compare loop emits "continue" for every key from MemTotal through HugePages_Rsvd before the match below]
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
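With anon and surp both measured as 0, the verifier next reads HugePages_Rsvd and then compares totals against the expected 1024 pages. A simplified sketch of that bookkeeping, hedged because verify_nr_hugepages actually sums per-node sysfs counters rather than only the global /proc/meminfo fields:

    # Simplified verification bookkeeping (global meminfo only).
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in the snapshots
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0
    expected=1024
    if (( total - surp == expected )); then
        echo "hugepage total matches: $expected (resv=$resv)"
    else
        echo "mismatch: total=$total surp=$surp resv=$resv" >&2
    fi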
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.348 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' \
    'MemTotal: 126338848 kB' 'MemFree: 108393084 kB' 'MemAvailable: 112123540 kB' 'Buffers: 4132 kB' 'Cached: 10644748 kB' 'SwapCached: 0 kB' \
    'Active: 7603200 kB' 'Inactive: 3701232 kB' 'Active(anon): 7111768 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' \
    'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' \
    'AnonPages: 658444 kB' 'Mapped: 196408 kB' 'Shmem: 6456216 kB' 'KReclaimable: 578736 kB' 'Slab: 1456392 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877656 kB' \
    'KernelStack: 28000 kB' 'PageTables: 9592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' \
    'CommitLimit: 70509452 kB' 'Committed_AS: 8729536 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238428 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' \
    'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' \
    'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' \
    'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' \
    'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB'
00:04:10.348 [trace condensed: the setup/common.sh@31-32 read/compare loop against HugePages_Rsvd, shown here from MemTotal through WritebackTmp; trace continues]
00:04:10.349 14:46:25
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.349 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.350 nr_hugepages=1024 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.350 resv_hugepages=0 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.350 surplus_hugepages=0 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.350 anon_hugepages=0 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108392916 kB' 'MemAvailable: 112123372 kB' 'Buffers: 4132 kB' 'Cached: 10644788 kB' 'SwapCached: 0 kB' 'Active: 7601480 kB' 'Inactive: 3701232 kB' 'Active(anon): 7110048 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657176 kB' 'Mapped: 196356 kB' 'Shmem: 6456256 kB' 'KReclaimable: 578736 kB' 'Slab: 1456376 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877640 kB' 'KernelStack: 27888 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8712524 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238364 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.350 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60539856 kB' 'MemUsed: 5119152 kB' 'SwapCached: 0 kB' 'Active: 1444624 kB' 'Inactive: 285928 kB' 'Active(anon): 1286876 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1605336 
kB' 'Mapped: 41104 kB' 'AnonPages: 128468 kB' 'Shmem: 1161660 kB' 'KernelStack: 13640 kB' 'PageTables: 3008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324592 kB' 'Slab: 756328 kB' 'SReclaimable: 324592 kB' 'SUnreclaim: 431736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.351 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.352 
00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.352 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47855696 kB' 'MemUsed: 12824144 kB' 'SwapCached: 0 kB' 'Active: 6157444 kB' 'Inactive: 3415304 kB' 'Active(anon): 5823760 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415304 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9043608 kB' 'Mapped: 155208 kB' 'AnonPages: 528884 kB' 'Shmem: 5294620 kB' 'KernelStack: 14296 kB' 'PageTables: 6628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254144 kB' 'Slab: 700048 kB' 'SReclaimable: 254144 kB' 'SUnreclaim: 445904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 walk the node1 snapshot; MemTotal through HugePages_Free each fail the HugePages_Surp match and continue]
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:10.353 node0=512 expecting 512
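The sorted_t/sorted_s assignments around each echo are a set-membership trick: every observed per-node count becomes an associative-array key, so a single distinct key at the end means all nodes landed on the same value. An illustrative sketch of the idea (names follow the trace; the nodes_sys values are hypothetical, only nodes_test holding 512 per node is attested by this run):

    #!/usr/bin/env bash
    declare -A sorted_t=() sorted_s=()
    nodes_test=(512 512)   # expected per-node counts, as assigned earlier in the trace
    nodes_sys=(512 512)    # observed per-node counts; hypothetical here

    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # key is the count itself; value is irrelevant
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

    # One distinct key means every node ended up with the same count.
    (( ${#sorted_t[@]} == 1 )) && echo 'per-node counts are uniform'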
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:10.353 node1=512 expecting 512
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:10.353
00:04:10.353 real 0m4.002s
00:04:10.353 user 0m1.525s
00:04:10.353 sys 0m2.541s
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:10.353 14:46:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:10.353 ************************************
00:04:10.353 END TEST per_node_1G_alloc
00:04:10.353 ************************************
00:04:10.353 14:46:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:10.353 14:46:26 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:10.353 14:46:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:10.353 14:46:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:10.353 14:46:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:10.353 ************************************
00:04:10.353 START TEST even_2G_alloc
00:04:10.353 ************************************
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
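nr_hugepages=1024 falls straight out of the request: get_test_nr_hugepages was called with 2097152, and dividing by the 2048 kB Hugepagesize reported in the snapshots gives 1024 pages, split 512/512 across the two nodes in the loop traced below. The arithmetic, with units inferred from the result rather than taken from hugepages.sh itself:

    #!/usr/bin/env bash
    size=2097152              # requested amount in kB (2 GiB), per the trace
    default_hugepages=2048    # 'Hugepagesize: 2048 kB' from the meminfo snapshots
    _no_nodes=2               # NUMA nodes on this machine

    nr_hugepages=$(( size / default_hugepages ))          # 1024 pages in total
    echo "nr_hugepages=$nr_hugepages, $(( nr_hugepages / _no_nodes )) per node"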
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.353 14:46:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:14.566 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:14.566 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:14.566 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
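setup.sh has just re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, so each node should now hold 512 of the 1024 pages. The standard kernel interface for this kind of per-node 2 MiB hugepage placement is the nodeN sysfs knob; the snippet below shows that general mechanism, not a copy of scripts/setup.sh (root required):

    #!/usr/bin/env bash
    # Ask the kernel for 512 x 2 MiB hugepages on each of two NUMA nodes.
    for node in 0 1; do
        echo 512 > "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
    done
    # Report what was actually granted on each node.
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

The verify pass that follows cross-checks this against /proc/meminfo and the per-node meminfo files.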
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.566 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108399616 kB' 'MemAvailable: 112130072 kB' 'Buffers: 4132 kB' 'Cached: 10644948 kB' 'SwapCached: 0 kB' 'Active: 7603088 kB' 'Inactive: 3701232 kB' 'Active(anon): 7111656 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658040 kB' 'Mapped: 196384 kB' 'Shmem: 6456416 kB' 'KReclaimable: 578736 kB' 'Slab: 1456756 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878020 kB' 'KernelStack: 27808 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8712848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238300 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB'
[xtrace elided: setup/common.sh@31-32 walk the system snapshot; MemTotal through HardwareCorrupted each fail the AnonHugePages match and continue]
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.568 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108399112 kB' 'MemAvailable: 112129568 kB' 'Buffers: 4132 kB' 'Cached: 10644948 kB' 'SwapCached: 0 kB' 'Active: 7603824 kB' 'Inactive: 3701232 kB' 'Active(anon): 7112392 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658756 kB' 'Mapped: 196384 kB' 'Shmem: 6456416 kB' 'KReclaimable: 578736 kB' 'Slab: 1456748 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878012 kB' 'KernelStack: 27792 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8712864 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238300 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB'
[xtrace elided: the same field-by-field walk repeats for HugePages_Surp; MemTotal through HugePages_Rsvd fail the match, and the captured log breaks off here, mid-scan]
setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108399112 kB' 'MemAvailable: 112129568 kB' 'Buffers: 4132 kB' 'Cached: 10644972 kB' 'SwapCached: 0 kB' 'Active: 7602068 kB' 'Inactive: 3701232 kB' 'Active(anon): 7110636 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657496 kB' 'Mapped: 196304 kB' 'Shmem: 6456440 kB' 'KReclaimable: 578736 kB' 'Slab: 1456740 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878004 kB' 'KernelStack: 27776 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8712888 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238300 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.570 14:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.570 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
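The long run of "IFS=': '", "read -r var val _", "[[ <key> == \H\u\g\e... ]]", "continue" lines above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one key per iteration until it reaches the requested field: HugePages_Surp returned 0 at common.sh@33, hugepages.sh@99 set surp=0, and the same scan is now repeating for HugePages_Rsvd. Below is a minimal, self-contained reconstruction of that helper inferred from the trace; the names follow what the trace shows, but the details may differ from the actual SPDK source.

  # Sketch of get_meminfo as reconstructed from this trace (not the verbatim script).
  shopt -s extglob                       # needed for the "Node N " prefix strip below
  get_meminfo() {
      local get=$1 node=${2:-}           # key to look up, optional NUMA node number
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. HugePages_Surp -> 0
      done
      return 1
  }
  # Usage seen in this log: get_meminfo HugePages_Total; get_meminfo HugePages_Surp 0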
00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 
14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.571 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:14.572 nr_hugepages=1024 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.572 resv_hugepages=0 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.572 surplus_hugepages=0 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.572 anon_hugepages=0 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv 
)) 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108398392 kB' 'MemAvailable: 112128848 kB' 'Buffers: 4132 kB' 'Cached: 10644992 kB' 'SwapCached: 0 kB' 'Active: 7602132 kB' 'Inactive: 3701232 kB' 'Active(anon): 7110700 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657584 kB' 'Mapped: 196304 kB' 'Shmem: 6456460 kB' 'KReclaimable: 578736 kB' 'Slab: 1456740 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878004 kB' 'KernelStack: 27776 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8713156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238300 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
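With surp=0 and resv=0 in hand, setup/hugepages.sh@102-@105 echoes the bookkeeping values (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and @107-@110 asserts that the HugePages_Total read back from /proc/meminfo equals nr_hugepages + surp + resv. The following is a small stand-alone sketch of that arithmetic using values from this run; the awk reads stand in for the script's get_meminfo calls.

  # Same accounting check, done directly against /proc/meminfo.
  nr_hugepages=1024                                            # requested by the test
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0 in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
  # 1024 pages * 2048 kB/page = 2097152 kB = 2 GiB, matching the 'Hugetlb: 2097152 kB'
  # line in the meminfo dump above and the test's even_2G_alloc name.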
00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 
14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.572 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:14.573 14:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.573 
14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.573 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60537468 kB' 'MemUsed: 5121540 kB' 'SwapCached: 0 kB' 'Active: 1444948 kB' 'Inactive: 285928 kB' 'Active(anon): 1287200 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1605468 kB' 'Mapped: 41088 kB' 'AnonPages: 128616 kB' 'Shmem: 1161792 kB' 'KernelStack: 13624 kB' 'PageTables: 2912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324592 kB' 'Slab: 756392 kB' 'SReclaimable: 324592 kB' 'SUnreclaim: 431800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.574 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 
14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.575 
14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47860592 kB' 'MemUsed: 12819248 kB' 'SwapCached: 0 kB' 'Active: 6157924 kB' 'Inactive: 3415304 kB' 'Active(anon): 5824240 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415304 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9043676 kB' 'Mapped: 155216 kB' 'AnonPages: 529864 kB' 'Shmem: 5294688 kB' 'KernelStack: 14120 kB' 'PageTables: 5996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254144 kB' 'Slab: 700348 kB' 'SReclaimable: 254144 kB' 'SUnreclaim: 446204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.575 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 
14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:14.576 node0=512 expecting 512 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.576 
14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:14.576 node1=512 expecting 512 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:14.576 00:04:14.576 real 0m3.973s 00:04:14.576 user 0m1.581s 00:04:14.576 sys 0m2.456s 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.576 14:46:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.576 ************************************ 00:04:14.576 END TEST even_2G_alloc 00:04:14.576 ************************************ 00:04:14.576 14:46:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:14.576 14:46:30 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:14.576 14:46:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.576 14:46:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.576 14:46:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.576 ************************************ 00:04:14.576 START TEST odd_alloc 00:04:14.576 ************************************ 00:04:14.576 14:46:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:14.576 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:14.576 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:14.576 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:14.576 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.576 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:14.576 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:14.576 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.576 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # 
: 0 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.577 14:46:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:17.893 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:17.893 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.893 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108398440 kB' 'MemAvailable: 112128896 kB' 'Buffers: 4132 kB' 'Cached: 10645132 kB' 'SwapCached: 0 kB' 'Active: 7603148 kB' 'Inactive: 3701232 kB' 'Active(anon): 7111716 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658008 kB' 'Mapped: 196428 kB' 'Shmem: 6456600 kB' 'KReclaimable: 578736 kB' 'Slab: 1456440 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877704 kB' 'KernelStack: 27792 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8713800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238380 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 
14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.229 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 
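[editor's note] The trace above walks every line of /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo) until it reaches the requested field, which is why the loop of "[[ Field == ... ]] / continue" entries dominates this log. The following is a minimal standalone sketch of that lookup, for illustration only; it is not the SPDK setup/common.sh helper, and the function name, field, and node number are example values.

get_meminfo_field() {
    local field=$1 node=${2:-}          # e.g. HugePages_Surp, optional NUMA node
    local src=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        # Per-node view; its lines carry a "Node N " prefix.
        src=/sys/devices/system/node/node$node/meminfo
    fi
    # Strip the optional "Node N " prefix, then print the value for the requested field.
    sed -e 's/^Node [0-9]* //' "$src" | awk -v f="$field:" '$1 == f {print $2}'
}

# Example: surplus huge pages on node 0 (corresponds to the "echo 0" seen in the trace above)
get_meminfo_field HugePages_Surp 0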
00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108399304 kB' 'MemAvailable: 112129760 kB' 'Buffers: 4132 kB' 'Cached: 10645132 kB' 'SwapCached: 0 kB' 'Active: 7603592 kB' 'Inactive: 3701232 kB' 'Active(anon): 7112160 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658420 kB' 'Mapped: 196428 kB' 'Shmem: 6456600 kB' 'KReclaimable: 578736 kB' 'Slab: 1456432 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877696 kB' 'KernelStack: 27792 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8713816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238348 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.230 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.231 14:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108399340 kB' 'MemAvailable: 112129796 kB' 'Buffers: 4132 kB' 'Cached: 10645152 kB' 'SwapCached: 0 kB' 'Active: 7602560 kB' 'Inactive: 3701232 kB' 'Active(anon): 7111128 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 
'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657824 kB' 'Mapped: 196320 kB' 'Shmem: 6456620 kB' 'KReclaimable: 578736 kB' 'Slab: 1456456 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877720 kB' 'KernelStack: 27744 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8713820 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238316 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.231 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:18.232 14:46:34 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:18.232 nr_hugepages=1025 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.232 resv_hugepages=0 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.232 surplus_hugepages=0 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.232 anon_hugepages=0 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108400036 kB' 'MemAvailable: 112130492 kB' 'Buffers: 4132 kB' 'Cached: 10645188 kB' 'SwapCached: 0 kB' 'Active: 7602200 kB' 'Inactive: 3701232 kB' 'Active(anon): 7110768 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657420 kB' 'Mapped: 196320 kB' 'Shmem: 6456656 kB' 'KReclaimable: 578736 kB' 'Slab: 1456456 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877720 kB' 'KernelStack: 27728 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8713840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238300 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.232 14:46:34 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue
[... xtrace elided: one '[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' / continue pair per remaining meminfo key, Writeback through Unaccepted ...]
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
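Note: the scan elided above is a single read loop in setup/common.sh. The sketch below is reconstructed from the common.sh@17-33 xtrace entries; it mirrors the names in the log but should be treated as a reconstruction, not the shipped SPDK source. The backslash-escaped patterns in the log (e.g. \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) are just how xtrace prints the right-hand side of [[ == ]]; the comparison itself is a plain literal match.

  #!/usr/bin/env bash
  shopt -s extglob

  get_meminfo() {
      local get=$1
      local node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # Per-node counters live in sysfs; prefer them when a node is given.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # sysfs prefixes every line with "Node N "; strip it (needs extglob).
      mem=("${mem[@]#Node +([0-9]) }")
      # One comparison per key: this loop is what produces the long runs of
      # "[[ <key> == <get> ]] / continue" entries in the trace.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total    # -> 1025 on the box under test
  get_meminfo HugePages_Surp 0   # -> 0 (per-node sysfs file)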
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60548960 kB' 'MemUsed: 5110048 kB' 'SwapCached: 0 kB' 'Active: 1446840 kB' 'Inactive: 285928 kB' 'Active(anon): 1289092 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1605620 kB' 'Mapped: 41088 kB' 'AnonPages: 130440 kB' 'Shmem: 1161944 kB' 'KernelStack: 13640 kB' 'PageTables: 3008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324592 kB' 'Slab: 756252 kB' 'SReclaimable: 324592 kB' 'SUnreclaim: 431660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: per-key scan of the node0 dump, MemTotal through HugePages_Free, each missing \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...]
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.233 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.234 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47853892 kB' 'MemUsed: 12825948 kB' 'SwapCached: 0 kB' 'Active: 6155700 kB' 'Inactive: 3415304 kB' 'Active(anon): 5822016 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415304 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9043720 kB' 'Mapped: 155232 kB' 'AnonPages: 527384 kB' 'Shmem: 5294732 kB' 'KernelStack: 14120 kB' 'PageTables: 5972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254144 kB' 'Slab: 700204 kB' 'SReclaimable: 254144 kB' 'SUnreclaim: 446060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
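The two per-node dumps are internally consistent: MemUsed is simply MemTotal minus MemFree on each node, which a quick bash arithmetic check confirms.

  # Consistency check on the node dumps above: MemUsed == MemTotal - MemFree.
  echo $(( 65659008 - 60548960 ))   # node0: 5110048, matches 'MemUsed: 5110048 kB'
  echo $(( 60679840 - 47853892 ))   # node1: 12825948, matches 'MemUsed: 12825948 kB'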
[... xtrace elided: per-key scan of the node1 dump for HugePages_Surp, same continue pattern as node0 ...]
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:18.235 node0=512 expecting 513
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:18.235 node1=513 expecting 512
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:18.235 
00:04:18.235 real 0m3.969s
00:04:18.235 user 0m1.529s
00:04:18.235 sys 0m2.500s
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:18.235 14:46:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:18.235 ************************************
00:04:18.235 END TEST odd_alloc
00:04:18.235 ************************************
00:04:18.235 14:46:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:18.235 14:46:34 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:18.235 14:46:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:18.235 14:46:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:18.235 14:46:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
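What odd_alloc just verified: a deliberately odd request (1025 pages of 2048 kB) cannot split evenly across two NUMA nodes, so the kernel lands on 512/513; hugepages.sh@126-130 only requires the sorted per-node counts to match, which is why each node prints the other's count as "expecting". A hypothetical spot-check of the same state via sysfs, assuming the 2 MiB default hugepage size used throughout this run:

  # Hypothetical spot-check of the odd_alloc result: 1025 pages split 512 + 513.
  total=0
  for node in /sys/devices/system/node/node[0-9]*; do
      n=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
      printf '%s: %s\n' "${node##*/}" "$n"
      (( total += n ))
  done
  echo "total: $total"   # 1025 here; which node holds the odd page may vary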
00:04:18.235 ************************************
00:04:18.235 START TEST custom_alloc
00:04:18.235 ************************************
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[... xtrace elided: per-node defaults for _nr_hugepages=512 over _no_nodes=2, setting nodes_test[0]=nodes_test[1]=256 ...]
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[... xtrace elided: this pass finds nodes_hp[0]=512 already set, copies it into nodes_test and returns ...]
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[... xtrace elided: final pass copies nodes_hp into nodes_test (node0=512, node1=1024) and returns ...]
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
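custom_alloc hands its per-node plan to setup.sh through the HUGENODE string built above, comma-joined via the function's local IFS=','. Below is a sketch of splitting that shape back apart; the field handling is an assumption based on the 'nodes_hp[<node>]=<pages>' form, since scripts/setup.sh's actual parser does not appear in this log.

  # Splitting the HUGENODE string built by the trace above (sketch).
  HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
  IFS=',' read -ra entries <<< "$HUGENODE"
  for entry in "${entries[@]}"; do
      node=${entry#nodes_hp[}   # strip the 'nodes_hp[' prefix
      node=${node%%]*}          # keep only the node index
      pages=${entry#*=}         # page count after '='
      echo "node$node: $pages hugepages"
  done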
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:18.235 14:46:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:22.444 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:22.444 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:22.444 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:22.444 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:22.444 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:22.444 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:22.445 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.445 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107391680 kB' 'MemAvailable: 111122136 kB' 'Buffers: 4132 kB' 'Cached: 10645300 kB' 'SwapCached: 0 kB' 'Active: 7605496 kB' 'Inactive: 3701232 kB' 'Active(anon): 7114064 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659960 kB' 'Mapped: 196780 kB' 'Shmem: 6456768 kB' 'KReclaimable: 578736 kB' 'Slab: 1456556 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877820 kB' 'KernelStack: 28048 kB' 'PageTables: 9592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8718584 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238460 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB'
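The dump above matches the request just issued: node0's 512 plus node1's 1024 pages account for the 1536 seen in nr_hugepages, HugePages_Total, and Hugetlb (at the 2048 kB page size reported by Hugepagesize).

  # Totals check against the dump above.
  echo $(( 512 + 1024 ))    # 1536 == nr_hugepages and 'HugePages_Total: 1536'
  echo $(( 1536 * 2048 ))   # 3145728 == 'Hugetlb: 3145728 kB' at 2048 kB/page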
[... xtrace elided: per-key scan of the dump above for \A\n\o\n\H\u\g\e\P\a\g\e\s; MemTotal through NFS_Unstable each miss the pattern and hit continue ...]
00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107392004 kB' 'MemAvailable: 111122460 kB' 'Buffers: 4132 kB' 'Cached: 10645304 kB' 'SwapCached: 0 kB' 'Active: 7607808 kB' 'Inactive: 3701232 kB' 'Active(anon): 7116376 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662840 kB' 'Mapped: 196764 kB' 'Shmem: 6456772 kB' 'KReclaimable: 578736 kB' 'Slab: 1456504 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877768 kB' 'KernelStack: 28112 kB' 'PageTables: 9540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8719912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238380 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.446 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.447 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.448 
14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.448 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107393060 kB' 'MemAvailable: 111123516 kB' 'Buffers: 4132 kB' 'Cached: 10645324 kB' 'SwapCached: 0 kB' 'Active: 7609284 kB' 'Inactive: 3701232 kB' 'Active(anon): 7117852 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664236 kB' 'Mapped: 196860 kB' 'Shmem: 6456792 kB' 'KReclaimable: 578736 kB' 'Slab: 1456560 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877824 kB' 'KernelStack: 27888 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8723780 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238400 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.449 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 
14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.450 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_
[xtrace loop trimmed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue past the remaining keys (Unaccepted, HugePages_Total, HugePages_Free) before the match below]
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:22.451 nr_hugepages=1536
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:22.451 resv_hugepages=0
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:22.451 surplus_hugepages=0
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:22.451 anon_hugepages=0
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.451 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107392396 kB' 'MemAvailable: 111122852 kB' 'Buffers: 4132 kB' 'Cached: 10645324 kB' 'SwapCached: 0 kB' 'Active: 7604472 kB' 'Inactive: 3701232 kB' 'Active(anon): 7113040 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659392 kB' 'Mapped: 196356 kB' 'Shmem: 6456792 kB' 'KReclaimable: 578736 kB' 'Slab: 1456560 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 877824 kB' 'KernelStack: 27872 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8717756 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238428 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB'
[xtrace loop trimmed: setup/common.sh@31-32 repeats the same IFS=': ' / read / continue cycle over every key from MemTotal through Unaccepted before the match below]
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
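The scan traced above is setup/common.sh's get_meminfo helper walking every /proc/meminfo key until the requested one matches, then echoing its value. A minimal bash re-creation of that pattern follows; the function name get_meminfo_sketch and the streaming layout are illustrative, not the suite's code, which works on a mapfile'd array instead:

  #!/usr/bin/env bash
  # get_meminfo_sketch KEY [NODE] -- sketch of the key scan traced above.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val
      # Per-node stats live in sysfs, one meminfo file per NUMA node.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#"Node $node "}   # node files prefix every key with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
  }

  get_meminfo_sketch HugePages_Total    # prints 1536 on the rig dumped above
  get_meminfo_sketch HugePages_Surp 0   # prints 0, as node 0's readout shows

The xtrace noise comes from exactly this shape: one IFS=': ' / read / continue round per meminfo key until the target line is reached.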
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.453 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60563052 kB' 'MemUsed: 5095956 kB' 'SwapCached: 0 kB' 'Active: 1448192 kB' 'Inactive: 285928 kB' 'Active(anon): 1290444 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1605768 kB' 'Mapped: 41108 kB' 'AnonPages: 131532 kB' 'Shmem: 1162092 kB' 'KernelStack: 13608 kB' 'PageTables: 2960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324592 kB' 'Slab: 756348 kB' 'SReclaimable: 324592 kB' 'SUnreclaim: 431756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace loop trimmed: setup/common.sh@31-32 cycles IFS=': ' / read / continue over node0's keys from MemTotal through HugePages_Free before the match below]
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
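Out of trace form, the bookkeeping behind the nodes_test arithmetic at hugepages.sh@115-117 is small: start from the pages requested per node, fold in the reserved and surplus pages just read back, and later compare the joined totals against the expectation ('node0=512 expecting 512', 'node1=1024 expecting 1024' further down). A sketch with this run's numbers; array names follow hugepages.sh, the literals come from the dumps above:

  #!/usr/bin/env bash
  # Per-node bookkeeping sketched from the setup/hugepages.sh trace.
  nodes_test=(512 1024)   # pages the test asked for on node 0 and node 1
  nodes_sys=(512 1024)    # pages the kernel actually placed (from get_nodes)
  resv=0                  # HugePages_Rsvd from the global readout
  surp=(0 0)              # HugePages_Surp per node, as read above

  for node in "${!nodes_test[@]}"; do
      # Reserved and surplus pages still count toward a node's total.
      (( nodes_test[node] += resv + surp[node] ))
  done

  # The test passes when the adjusted per-node totals match the request.
  [[ ${nodes_test[0]},${nodes_test[1]} == "512,1024" ]] && echo "per-node split OK"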
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.454 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 46831628 kB' 'MemUsed: 13848212 kB' 'SwapCached: 0 kB' 'Active: 6155596 kB' 'Inactive: 3415304 kB' 'Active(anon): 5821912 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415304 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9043736 kB' 'Mapped: 155252 kB' 'AnonPages: 527216 kB' 'Shmem: 5294748 kB' 'KernelStack: 14264 kB' 'PageTables: 6416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254144 kB' 'Slab: 700212 kB' 'SReclaimable: 254144 kB' 'SUnreclaim: 446068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace loop trimmed: setup/common.sh@31-32 cycles IFS=': ' / read / continue over node1's keys from MemTotal through HugePages_Free before the match below]
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:22.456 node0=512 expecting 512
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:22.456 node1=1024 expecting 1024
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:22.456 
00:04:22.456 real 0m4.039s
00:04:22.456 user 0m1.579s
00:04:22.456 sys 0m2.521s
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:22.456 14:46:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:22.456 ************************************
00:04:22.456 END TEST custom_alloc
00:04:22.456 ************************************
00:04:22.456 14:46:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:22.456 14:46:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:22.456 14:46:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:22.456 14:46:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:22.456 14:46:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:22.456 ************************************
00:04:22.456 START TEST no_shrink_alloc
00:04:22.456 ************************************
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.456 14:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:26.665 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:26.665 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:26.665 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
setup/hugepages.sh@90 -- # local sorted_t 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108376344 kB' 'MemAvailable: 112106800 kB' 'Buffers: 4132 kB' 'Cached: 10645492 kB' 'SwapCached: 0 kB' 'Active: 7605608 kB' 'Inactive: 3701232 kB' 'Active(anon): 7114176 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660028 kB' 'Mapped: 196476 kB' 'Shmem: 6456960 kB' 'KReclaimable: 578736 kB' 'Slab: 1457080 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878344 kB' 'KernelStack: 27776 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8715872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238348 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:26.665 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 
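(The dump above is get_meminfo reading all of /proc/meminfo into an array with one mapfile, then scanning it key by key; the long run of '[[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' / 'continue' lines that follows is that scan, one iteration per meminfo key. A minimal re-implementation of the same pattern, assuming the standard 'Key: value kB' format of /proc/meminfo; the real helper, as the trace shows, also probes per-node files under /sys/devices/system/node/ and strips their 'Node N ' prefix, which this sketch omits:

  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do   # split 'Key: value kB'
          [[ $var == "$get" ]] || continue   # the 'continue' lines in the trace
          echo "$val"                        # the 'kB' unit, if any, falls into $_
          return 0
      done < /proc/meminfo
  }
  get_meminfo_sketch AnonHugePages           # -> 0 on this node, per the dump above

Reading the file once and comparing in pure bash keeps the lookup fork-free, which matters when the helper is called this many times per test.)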
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.666 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108377720 kB' 'MemAvailable: 112108176 kB' 'Buffers: 4132 kB' 'Cached: 10645496 kB' 'SwapCached: 0 kB' 'Active: 7605348 kB' 'Inactive: 3701232 kB' 'Active(anon): 7113916 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659796 kB' 'Mapped: 196440 kB' 'Shmem: 6456964 kB' 'KReclaimable: 578736 kB' 'Slab: 1457032 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878296 kB' 'KernelStack: 27760 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8715892 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238332 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 
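(This second dump feeds the same scan repeated for HugePages_Surp. Surplus pages are pages the kernel has allocated beyond nr_hugepages via overcommit; the scan below returns 0, so the expected pool size is not skewed by overcommitted pages. For a one-off query outside this helper, an equivalent single awk pass would do, sketched here for comparison, though the script's pure-bash loop avoids spawning a process per lookup:

  awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # -> 0 on this node)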
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 
14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.667 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.668 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108379940 kB' 'MemAvailable: 112110396 kB' 'Buffers: 4132 kB' 'Cached: 10645512 kB' 'SwapCached: 0 kB' 'Active: 7604968 kB' 'Inactive: 3701232 kB' 'Active(anon): 7113536 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659924 kB' 'Mapped: 196364 kB' 'Shmem: 6456980 kB' 'KReclaimable: 578736 kB' 'Slab: 1457000 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878264 kB' 'KernelStack: 27776 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8716668 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238316 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 
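(The third dump drives the HugePages_Rsvd lookup: pages reserved by mappings but not yet faulted in. With anon=0, surp=0, and resv about to be read as 0, the verification this test is building toward reduces to comparing the kernel's pool with the requested count. An illustrative check only, reusing the get_meminfo_sketch helper defined earlier; the exact expression lives in setup/hugepages.sh and is not visible in this excerpt:

  total=$(get_meminfo_sketch HugePages_Total)   # 1024 in the dumps above
  surp=$(get_meminfo_sketch HugePages_Surp)     # 0
  rsvd=$(get_meminfo_sketch HugePages_Rsvd)     # 0
  (( total - surp == 1024 )) && echo "nr_hugepages verified")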
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.669 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 
14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.670 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.671 nr_hugepages=1024 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.671 resv_hugepages=0 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.671 surplus_hugepages=0 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.671 anon_hugepages=0 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108381576 kB' 'MemAvailable: 112112032 kB' 'Buffers: 4132 kB' 'Cached: 10645536 kB' 'SwapCached: 0 kB' 'Active: 7604844 kB' 'Inactive: 3701232 kB' 'Active(anon): 7113412 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659772 kB' 'Mapped: 196364 kB' 'Shmem: 6457004 kB' 'KReclaimable: 578736 kB' 'Slab: 1456988 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878252 kB' 'KernelStack: 27760 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8715936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238284 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.671 14:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.671 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
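The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" followed by "continue" in the trace above come from the field-matching loop in setup/common.sh's get_meminfo: xtrace prints the glob pattern with every character escaped, so \H\u\g\e... is simply HugePages_Total, and the loop walks the mapfile'd copy of /proc/meminfo (or of a per-node meminfo file) one "Field: value" record at a time until the requested counter matches, then echoes its value and returns. The following is a minimal sketch of that lookup reconstructed from the trace, not the verbatim setup/common.sh, so treat names and details as approximate:

    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

    get_meminfo() {
        local get=$1          # counter to fetch, e.g. HugePages_Total or HugePages_Rsvd
        local node=$2         # optional NUMA node number
        local var val _ line
        local mem_f=/proc/meminfo mem

        # Prefer the per-node meminfo when a node was requested and the file exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Field: value [kB]" records until the requested field is found.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

On this box such a lookup yields 0 for HugePages_Rsvd and 1024 for HugePages_Total, matching the "# echo 0" and "# echo 1024" / "# return 0" entries recorded elsewhere in this trace.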
00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.672 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 59506148 kB' 'MemUsed: 6152860 kB' 'SwapCached: 0 kB' 'Active: 1446596 kB' 'Inactive: 285928 kB' 'Active(anon): 1288848 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1605928 kB' 'Mapped: 41096 kB' 'AnonPages: 129808 kB' 'Shmem: 1162252 kB' 'KernelStack: 13624 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324592 kB' 'Slab: 756560 kB' 'SReclaimable: 324592 kB' 'SUnreclaim: 431968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 
14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
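At this point the script has already confirmed the global pool (nr_hugepages=1024 with resv_hugepages=0 and surplus_hugepages=0, as echoed above) and is repeating the same meminfo scan against /sys/devices/system/node/node0/meminfo to attribute the pages to NUMA nodes; the result is the "node0=1024 expecting 1024" line a little further down. The snippet below is a loose, self-contained illustration of that per-node check; it reads the per-node counters with awk instead of the get_meminfo helper and is not the verbatim hugepages.sh logic:

    nodes_sys=(1024 0)   # expected split for this test: the whole pool on node0, none on node1
    declare -a nodes_test

    node_hp() {          # e.g. node_hp 0 HugePages_Total
        awk -v f="$2:" '$3 == f {print $4}' "/sys/devices/system/node/node$1/meminfo"
    }

    for node in "${!nodes_sys[@]}"; do
        total=$(node_hp "$node" HugePages_Total)
        surp=$(node_hp "$node" HugePages_Surp)
        nodes_test[node]=$(( total - surp ))   # persistent pages actually resident on this node
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] ||
            echo "node$node: unexpected hugepage layout" >&2
    done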
00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.673 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.674 14:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:26.674 node0=1024 expecting 1024 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.674 14:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:30.883 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 
0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:30.883 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:30.883 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108386788 kB' 'MemAvailable: 112117244 kB' 'Buffers: 4132 kB' 'Cached: 10645652 kB' 'SwapCached: 0 kB' 'Active: 7606572 kB' 'Inactive: 3701232 kB' 'Active(anon): 
7115140 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661260 kB' 'Mapped: 196420 kB' 'Shmem: 6457120 kB' 'KReclaimable: 578736 kB' 'Slab: 1457280 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878544 kB' 'KernelStack: 27776 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8716796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238268 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.883 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.884 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108386548 kB' 'MemAvailable: 112117004 kB' 'Buffers: 4132 kB' 'Cached: 10645656 kB' 'SwapCached: 0 kB' 'Active: 7606440 kB' 'Inactive: 3701232 kB' 'Active(anon): 7115008 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661144 kB' 'Mapped: 196384 kB' 'Shmem: 6457124 kB' 'KReclaimable: 578736 kB' 'Slab: 1457272 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878536 kB' 'KernelStack: 27776 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8718184 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238204 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.885 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 
14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.886 14:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.886 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108387248 kB' 'MemAvailable: 112117704 kB' 'Buffers: 4132 kB' 'Cached: 10645672 kB' 'SwapCached: 0 kB' 'Active: 7606008 kB' 'Inactive: 3701232 kB' 'Active(anon): 7114576 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660728 kB' 'Mapped: 196400 kB' 'Shmem: 6457140 kB' 'KReclaimable: 578736 kB' 'Slab: 1457312 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878576 kB' 'KernelStack: 27776 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8719948 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238236 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 
14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.887 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.888 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.889 nr_hugepages=1024 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.889 resv_hugepages=0 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.889 surplus_hugepages=0 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.889 
anon_hugepages=0 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108387000 kB' 'MemAvailable: 112117456 kB' 'Buffers: 4132 kB' 'Cached: 10645696 kB' 'SwapCached: 0 kB' 'Active: 7606236 kB' 'Inactive: 3701232 kB' 'Active(anon): 7114804 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660980 kB' 'Mapped: 196400 kB' 'Shmem: 6457164 kB' 'KReclaimable: 578736 kB' 'Slab: 1457312 kB' 'SReclaimable: 578736 kB' 'SUnreclaim: 878576 kB' 'KernelStack: 27712 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8718228 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238220 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4105588 kB' 'DirectMap2M: 57440256 kB' 'DirectMap1G: 74448896 kB' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
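The block above is setup/common.sh's get_meminfo at work: it snapshots /proc/meminfo (or a per-node copy under /sys/devices/system/node), then walks it with IFS=': ' and read -r var val _, skipping every field until the requested key (HugePages_Rsvd, then HugePages_Total) matches and its value is echoed back. A minimal self-contained sketch of that lookup, for illustration only (get_meminfo_value is a hypothetical name, not part of the SPDK scripts):

get_meminfo_value() {
    # key: meminfo field, e.g. HugePages_Rsvd; node: optional NUMA node number
    local key=$1 node=${2:-} file=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip it so both
    # formats parse the same way (common.sh does this with a parameter expansion).
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"    # kB for most fields, a bare page count for HugePages_*
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$file")
    return 1
}

# Example, matching the values echoed in the trace above:
#   get_meminfo_value HugePages_Rsvd      -> 0
#   get_meminfo_value HugePages_Total 0   -> 1024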
00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.889 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 
14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.890 14:46:46 
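Once the global count checks out (1024 == nr_hugepages + surp + resv), get_nodes in hugepages.sh enumerates /sys/devices/system/node/node* and records how many pages each node is expected to hold (1024 on node0, 0 on node1, no_nodes=2). A small sketch of the same per-node walk, assuming the standard sysfs layout and the 2048 kB page size reported in this log (count_node_hugepages is an illustrative name):

count_node_hugepages() {
    # Print "<node> <nr_hugepages>" for every NUMA node, reading the same
    # per-node counters the test later resets.
    local node_dir nr
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node_dir ]] || continue
        nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages" 2>/dev/null || echo 0)
        echo "${node_dir##*node} $nr"
    done
}

# On the machine in this log, this would print "0 1024" and "1 0".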
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.890 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59513620 kB' 'MemUsed: 6145388 kB' 'SwapCached: 0 kB' 'Active: 1446576 kB' 'Inactive: 285928 kB' 'Active(anon): 1288828 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285928 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1606012 kB' 'Mapped: 41112 kB' 'AnonPages: 129604 kB' 'Shmem: 1162336 kB' 'KernelStack: 13608 kB' 'PageTables: 2868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324592 kB' 'Slab: 756728 kB' 'SReclaimable: 324592 kB' 'SUnreclaim: 432136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 
14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:30.891 node0=1024 expecting 1024 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.891 00:04:30.891 real 0m8.022s 00:04:30.891 user 0m3.191s 00:04:30.891 sys 0m4.961s 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.891 14:46:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:30.891 ************************************ 00:04:30.891 END TEST no_shrink_alloc 00:04:30.891 ************************************ 00:04:30.891 14:46:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:30.891 14:46:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:30.891 00:04:30.891 real 0m28.752s 00:04:30.891 user 0m11.217s 00:04:30.891 sys 0m17.948s 00:04:30.891 14:46:46 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.891 14:46:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:30.891 ************************************ 00:04:30.891 END TEST hugepages 00:04:30.891 ************************************ 00:04:30.891 14:46:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:30.891 14:46:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:30.891 14:46:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.891 14:46:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.891 14:46:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.891 
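The no_shrink_alloc case ends by calling clear_hp, which loops over every node's hugepages-* directories, writes 0 back into nr_hugepages, and exports CLEAR_HUGE=yes so later stages know the pool was released. A hedged sketch of that cleanup (clear_hugepages is an illustrative name; the real helper lives in setup/hugepages.sh):

clear_hugepages() {
    # Zero every per-node hugepage counter, mirroring the hugepages.sh@37-45 loop above.
    local hp
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        [[ -e $hp ]] || continue
        echo 0 | sudo tee "$hp" > /dev/null
    done
    export CLEAR_HUGE=yes
}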
************************************ 00:04:30.891 START TEST driver 00:04:30.891 ************************************ 00:04:30.891 14:46:46 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:30.891 * Looking for test storage... 00:04:30.891 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:30.891 14:46:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:30.891 14:46:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.891 14:46:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.179 14:46:51 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:36.179 14:46:51 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.179 14:46:51 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.179 14:46:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.179 ************************************ 00:04:36.179 START TEST guess_driver 00:04:36.179 ************************************ 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:36.179 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:36.179 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:36.179 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:36.179 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:36.179 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:36.179 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:36.179 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:36.179 14:46:51 
setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:36.179 Looking for driver=vfio-pci 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.179 14:46:51 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 
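guess_driver settles on vfio-pci because the probe above finds a populated /sys/kernel/iommu_groups (370 groups), sees unsafe no-IOMMU mode left at N, and confirms via modprobe --show-depends vfio_pci that the module chain resolves to real .ko files. A condensed sketch of that decision; the uio_pci_generic fallback branch is an assumption here, since this trace only exercises the vfio path:

pick_pci_driver() {
    # Choose vfio-pci when the IOMMU is usable, otherwise fall back (assumed).
    local unsafe=N n_groups
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
    if { (( n_groups > 0 )) || [[ $unsafe == Y ]]; } &&
        modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic
    fi
}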
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.481 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.743 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:39.743 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:39.743 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.743 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:39.743 14:46:55 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:39.743 14:46:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.743 14:46:55 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.059 00:04:45.059 real 0m9.086s 00:04:45.059 user 0m2.958s 00:04:45.059 sys 0m5.393s 00:04:45.059 14:47:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.059 14:47:00 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.059 
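The guess_driver run above settles on vfio-pci by counting the entries under /sys/kernel/iommu_groups and checking that modprobe --show-depends resolves vfio_pci into real .ko modules. A minimal sketch of that selection logic, simplified from the captured xtrace (the real helper lives in test/setup/driver.sh; the unsafe-noiommu branch here is an assumption):

pick_driver() {
    # Read the unsafe no-IOMMU toggle when the vfio module exposes it (the run above saw N).
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # Count IOMMU groups; this host reported 370 of them.
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        # Only report vfio-pci if the module and its dependencies resolve on this kernel.
        if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
}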
************************************ 00:04:45.059 END TEST guess_driver 00:04:45.059 ************************************ 00:04:45.059 14:47:00 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:45.059 00:04:45.059 real 0m14.323s 00:04:45.059 user 0m4.534s 00:04:45.059 sys 0m8.340s 00:04:45.059 14:47:00 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.059 14:47:00 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.059 ************************************ 00:04:45.059 END TEST driver 00:04:45.059 ************************************ 00:04:45.059 14:47:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:45.059 14:47:00 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:45.059 14:47:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.059 14:47:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.059 14:47:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.059 ************************************ 00:04:45.059 START TEST devices 00:04:45.059 ************************************ 00:04:45.059 14:47:00 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:45.059 * Looking for test storage... 00:04:45.059 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:45.059 14:47:00 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:45.059 14:47:00 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:45.059 14:47:00 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.059 14:47:00 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 
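Before the block-device loop above picks nvme0n1, the devices suite filters out zoned namespaces: for each /sys/block/nvme* entry it reads queue/zoned and, as the trace shows for nvme0n1, treats a value of 'none' as a regular disk. A rough sketch of that filter (names follow the xtrace; the exact bookkeeping in test/setup/devices.sh may differ):

is_block_zoned() {
    local device=$1
    # Disks without the zoned attribute are treated as regular block devices.
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

get_zoned_devs() {
    local -gA zoned_devs=()
    local nvme
    for nvme in /sys/block/nvme*; do
        if is_block_zoned "${nvme##*/}"; then
            zoned_devs["${nvme##*/}"]=1
        fi
    done
}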
00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:49.268 14:47:05 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:49.268 14:47:05 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:49.268 No valid GPT data, bailing 00:04:49.268 14:47:05 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.268 14:47:05 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:49.268 14:47:05 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:49.268 14:47:05 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:49.268 14:47:05 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:49.268 14:47:05 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:49.268 14:47:05 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.268 14:47:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:49.268 ************************************ 00:04:49.268 START TEST nvme_mount 00:04:49.268 ************************************ 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # 
(( part <= part_no )) 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:49.269 14:47:05 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:50.211 Creating new GPT entries in memory. 00:04:50.211 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.211 other utilities. 00:04:50.211 14:47:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.211 14:47:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.211 14:47:06 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.211 14:47:06 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.211 14:47:06 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:51.161 Creating new GPT entries in memory. 00:04:51.161 The operation has completed successfully. 00:04:51.161 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:51.161 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.161 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1594808 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.422 14:47:07 
setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.422 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.423 14:47:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:55.629 14:47:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:55.629 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.629 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.630 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:55.630 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:55.630 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.630 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
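At this point nvme_mount has removed the test file, unmounted the scratch filesystem and wiped both the partition and the whole disk; the wipefs lines above show the ext4 magic, both GPT headers and the protective MBR being erased before the kernel re-reads the partition table. The cleanup step follows roughly this shape (simplified from the trace of cleanup_nvme in test/setup/devices.sh; $nvme_mount is the mount point used throughout the test):

cleanup_nvme() {
    # Unmount the scratch mount point if it is still mounted.
    if mountpoint -q "$nvme_mount"; then
        umount "$nvme_mount"
    fi
    # Wipe filesystem signatures from the test partition, then from the whole disk,
    # so no GPT header or protective MBR survives into the next sub-test.
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
    return 0
}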
00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.630 14:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:58.926 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:58.927 14:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.187 14:47:15 
setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:59.187 14:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.449 14:47:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.449 14:47:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.812 
14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:02.812 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:03.073 14:47:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.073 14:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.073 14:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:03.073 14:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:03.073 14:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:03.073 14:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.073 14:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.073 14:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.073 14:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.073 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.073 00:05:03.073 real 0m13.949s 00:05:03.073 user 0m4.331s 00:05:03.073 sys 0m7.522s 00:05:03.073 
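The nvme_mount sub-test that just reported its timings relies on a verify helper: it re-runs 'setup output config' with PCI_ALLOWED pinned to the NVMe device, scans the resulting status lines for the expected 'Active devices: ...' entry, then confirms the mount point and test file. A rough reconstruction from the xtrace above (the parsing details and the setup wrapper are assumptions; the real code is in test/setup/devices.sh):

verify() {
    local dev=$1 mounts=$2 mount_point=$3 test_file=$4
    local found=0 pci status
    # Each status line reads "<bdf> <vendor> <device> <free-text status>".
    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED="$dev" setup output config)
    (( found == 1 )) || return 1
    if [[ -n $mount_point ]]; then
        mountpoint -q "$mount_point" || return 1
    fi
    if [[ -n $test_file ]]; then
        [[ -e $test_file ]] || return 1
    fi
    return 0
}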
14:47:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.073 14:47:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:03.073 ************************************ 00:05:03.073 END TEST nvme_mount 00:05:03.073 ************************************ 00:05:03.333 14:47:19 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:03.333 14:47:19 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:03.333 14:47:19 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.333 14:47:19 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.333 14:47:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:03.333 ************************************ 00:05:03.333 START TEST dm_mount 00:05:03.333 ************************************ 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:03.333 14:47:19 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:04.291 Creating new GPT entries in memory. 00:05:04.291 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:04.291 other utilities. 00:05:04.291 14:47:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:04.291 14:47:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.291 14:47:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:04.291 14:47:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.291 14:47:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:05.231 Creating new GPT entries in memory. 00:05:05.231 The operation has completed successfully. 00:05:05.231 14:47:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:05.231 14:47:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.231 14:47:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:05.231 14:47:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:05.231 14:47:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:06.617 The operation has completed successfully. 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1600446 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.617 14:47:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:09.917 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:10.177 
14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.177 14:47:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:13.478 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.479 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:13.740 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:13.740 00:05:13.740 real 0m10.561s 00:05:13.740 user 0m2.698s 00:05:13.740 sys 0m4.887s 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.740 14:47:29 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:13.740 ************************************ 00:05:13.740 END TEST dm_mount 00:05:13.740 ************************************ 00:05:14.001 14:47:29 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:14.001 14:47:29 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:14.001 14:47:29 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:05:14.001 14:47:29 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
00:05:14.001 14:47:29 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:14.001 14:47:29 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:14.001 14:47:29 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:14.001 14:47:29 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:14.262 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:14.262 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:05:14.262 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:14.262 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:14.262 14:47:30 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:14.262 14:47:30 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount
00:05:14.262 14:47:30 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:14.262 14:47:30 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:14.262 14:47:30 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:14.262 14:47:30 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:14.262 14:47:30 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:14.262
00:05:14.262 real 0m29.227s
00:05:14.262 user 0m8.673s
00:05:14.262 sys 0m15.365s
00:05:14.262 14:47:30 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:14.262 14:47:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:14.262 ************************************
00:05:14.262 END TEST devices
00:05:14.262 ************************************
00:05:14.262 14:47:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:05:14.262
00:05:14.262 real 1m39.466s
00:05:14.262 user 0m33.469s
00:05:14.262 sys 0m57.604s
00:05:14.262 14:47:30 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:14.262 14:47:30 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:14.262 ************************************
00:05:14.262 END TEST setup.sh
00:05:14.262 ************************************
00:05:14.262 14:47:30 -- common/autotest_common.sh@1142 -- # return 0
00:05:14.262 14:47:30 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:05:18.469 Hugepages
00:05:18.469 node hugesize free / total
00:05:18.469 node0 1048576kB 0 / 0
00:05:18.469 node0 2048kB 2048 / 2048
00:05:18.469 node1 1048576kB 0 / 0
00:05:18.469 node1 2048kB 0 / 0
00:05:18.469
00:05:18.469 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:18.469 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:05:18.469 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:05:18.469 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:05:18.469 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:05:18.469 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:05:18.469 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:05:18.469 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:05:18.469 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:05:18.469 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:05:18.469 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:05:18.469 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:05:18.469 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:05:18.469 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:05:18.469 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:05:18.469 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:05:18.469 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:05:18.469 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:05:18.469 14:47:34 -- spdk/autotest.sh@130 -- # uname -s
00:05:18.469 14:47:34 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:05:18.469 14:47:34 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:05:18.469 14:47:34 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:21.774 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:05:21.774 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:05:23.684 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:05:23.684 14:47:39 -- common/autotest_common.sh@1532 -- # sleep 1
00:05:24.626 14:47:40 -- common/autotest_common.sh@1533 -- # bdfs=()
00:05:24.626 14:47:40 -- common/autotest_common.sh@1533 -- # local bdfs
00:05:24.626 14:47:40 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:05:24.626 14:47:40 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:05:24.626 14:47:40 -- common/autotest_common.sh@1513 -- # bdfs=()
00:05:24.626 14:47:40 -- common/autotest_common.sh@1513 -- # local bdfs
00:05:24.626 14:47:40 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:24.626 14:47:40 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:24.626 14:47:40 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:05:24.885 14:47:40 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:05:24.885 14:47:40 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0
00:05:24.885 14:47:40 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:05:29.086 Waiting for block devices as requested
00:05:29.086 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:05:29.086 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:05:29.086 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:05:29.086 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:05:29.086 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:05:29.086 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:05:29.086 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:05:29.086 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:05:29.346 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:05:29.346 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:05:29.346 0000:00:01.7 (8086 0b00): vfio-pci ->
ioatdma 00:05:29.607 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:29.607 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:29.607 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:29.868 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:29.868 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:29.868 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:29.868 14:47:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:29.868 14:47:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:29.868 14:47:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:29.868 14:47:45 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:29.868 14:47:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:29.868 14:47:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:29.868 14:47:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:29.868 14:47:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:29.868 14:47:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:29.868 14:47:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:29.868 14:47:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:29.868 14:47:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:29.868 14:47:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:29.868 14:47:45 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:29.868 14:47:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:29.868 14:47:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:29.868 14:47:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:29.868 14:47:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:29.868 14:47:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:29.868 14:47:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:29.868 14:47:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:29.868 14:47:45 -- common/autotest_common.sh@1557 -- # continue 00:05:29.868 14:47:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:29.868 14:47:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.868 14:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.868 14:47:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:29.868 14:47:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.868 14:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:29.868 14:47:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:34.077 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 
0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:34.077 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:34.077 14:47:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:34.077 14:47:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.077 14:47:49 -- common/autotest_common.sh@10 -- # set +x 00:05:34.077 14:47:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:34.077 14:47:49 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:34.077 14:47:49 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:34.077 14:47:49 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:34.077 14:47:49 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:34.077 14:47:49 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:34.077 14:47:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:34.077 14:47:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:34.077 14:47:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.077 14:47:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:34.077 14:47:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:34.077 14:47:50 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:34.077 14:47:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:34.077 14:47:50 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:34.077 14:47:50 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:34.077 14:47:50 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:34.077 14:47:50 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:34.077 14:47:50 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:34.077 14:47:50 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:34.077 14:47:50 -- common/autotest_common.sh@1593 -- # return 0 00:05:34.077 14:47:50 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:34.077 14:47:50 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:34.077 14:47:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.077 14:47:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.077 14:47:50 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:34.077 14:47:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.077 14:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:34.077 14:47:50 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:34.077 14:47:50 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:34.077 14:47:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.077 14:47:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.077 14:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:34.077 ************************************ 00:05:34.077 START TEST env 00:05:34.077 ************************************ 00:05:34.077 14:47:50 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:34.338 * Looking for test storage... 
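The env suite launched here chains five standalone test binaries: memory_ut, vtophys, pci_ut, env_dpdk_post_init and mem_callbacks, whose output follows below. A minimal sketch for reproducing this stage by hand, assuming the same SPDK checkout path and root privileges; setup.sh, setup.sh reset and env.sh are the commands this log itself invokes, while the HUGEMEM value is an assumption:

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    sudo HUGEMEM=4096 ./scripts/setup.sh   # rebind NVMe/I-OAT devices to vfio-pci and reserve huge pages (4096 MB is an assumed size)
    sudo ./test/env/env.sh                 # runs memory_ut, vtophys, pci_ut, env_dpdk_post_init and mem_callbacks in turn
    sudo ./scripts/setup.sh reset          # hand the devices back to their kernel drivers, as the pre-cleanup above did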
00:05:34.339 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env
00:05:34.339 14:47:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:05:34.339 14:47:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:34.339 14:47:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:34.339 14:47:50 env -- common/autotest_common.sh@10 -- # set +x
00:05:34.339 ************************************
00:05:34.339 START TEST env_memory
00:05:34.339 ************************************
00:05:34.339 14:47:50 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:05:34.339
00:05:34.339
00:05:34.339 CUnit - A unit testing framework for C - Version 2.1-3
00:05:34.339 http://cunit.sourceforge.net/
00:05:34.339
00:05:34.339
00:05:34.339 Suite: memory
00:05:34.339 Test: alloc and free memory map ...[2024-07-15 14:47:50.267994] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:34.339 passed
00:05:34.339 Test: mem map translation ...[2024-07-15 14:47:50.293623] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:34.339 [2024-07-15 14:47:50.293654] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:34.339 [2024-07-15 14:47:50.293702] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:34.339 [2024-07-15 14:47:50.293710] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:34.339 passed
00:05:34.339 Test: mem map registration ...[2024-07-15 14:47:50.349119] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:05:34.339 [2024-07-15 14:47:50.349142] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:05:34.339 passed
00:05:34.600 Test: mem map adjacent registrations ...passed
00:05:34.600
00:05:34.600 Run Summary: Type Total Ran Passed Failed Inactive
00:05:34.600 suites 1 1 n/a 0 0
00:05:34.600 tests 4 4 4 0 0
00:05:34.600 asserts 152 152 152 0 n/a
00:05:34.600
00:05:34.600 Elapsed time = 0.193 seconds
00:05:34.600
00:05:34.600 real 0m0.207s
00:05:34.600 user 0m0.195s
00:05:34.600 sys 0m0.010s
00:05:34.600 14:47:50 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:34.600 14:47:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:05:34.600 ************************************
00:05:34.600 END TEST env_memory
00:05:34.600 ************************************
00:05:34.600 14:47:50 env -- common/autotest_common.sh@1142 -- # return 0
00:05:34.600 14:47:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:34.600 14:47:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:34.600 14:47:50 env --
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.600 14:47:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.600 ************************************ 00:05:34.600 START TEST env_vtophys 00:05:34.600 ************************************ 00:05:34.600 14:47:50 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.600 EAL: lib.eal log level changed from notice to debug 00:05:34.600 EAL: Detected lcore 0 as core 0 on socket 0 00:05:34.600 EAL: Detected lcore 1 as core 1 on socket 0 00:05:34.600 EAL: Detected lcore 2 as core 2 on socket 0 00:05:34.600 EAL: Detected lcore 3 as core 3 on socket 0 00:05:34.600 EAL: Detected lcore 4 as core 4 on socket 0 00:05:34.600 EAL: Detected lcore 5 as core 5 on socket 0 00:05:34.600 EAL: Detected lcore 6 as core 6 on socket 0 00:05:34.600 EAL: Detected lcore 7 as core 7 on socket 0 00:05:34.600 EAL: Detected lcore 8 as core 8 on socket 0 00:05:34.600 EAL: Detected lcore 9 as core 9 on socket 0 00:05:34.600 EAL: Detected lcore 10 as core 10 on socket 0 00:05:34.600 EAL: Detected lcore 11 as core 11 on socket 0 00:05:34.600 EAL: Detected lcore 12 as core 12 on socket 0 00:05:34.600 EAL: Detected lcore 13 as core 13 on socket 0 00:05:34.600 EAL: Detected lcore 14 as core 14 on socket 0 00:05:34.600 EAL: Detected lcore 15 as core 15 on socket 0 00:05:34.600 EAL: Detected lcore 16 as core 16 on socket 0 00:05:34.600 EAL: Detected lcore 17 as core 17 on socket 0 00:05:34.600 EAL: Detected lcore 18 as core 18 on socket 0 00:05:34.600 EAL: Detected lcore 19 as core 19 on socket 0 00:05:34.600 EAL: Detected lcore 20 as core 20 on socket 0 00:05:34.600 EAL: Detected lcore 21 as core 21 on socket 0 00:05:34.600 EAL: Detected lcore 22 as core 22 on socket 0 00:05:34.600 EAL: Detected lcore 23 as core 23 on socket 0 00:05:34.600 EAL: Detected lcore 24 as core 24 on socket 0 00:05:34.600 EAL: Detected lcore 25 as core 25 on socket 0 00:05:34.600 EAL: Detected lcore 26 as core 26 on socket 0 00:05:34.600 EAL: Detected lcore 27 as core 27 on socket 0 00:05:34.600 EAL: Detected lcore 28 as core 28 on socket 0 00:05:34.600 EAL: Detected lcore 29 as core 29 on socket 0 00:05:34.600 EAL: Detected lcore 30 as core 30 on socket 0 00:05:34.600 EAL: Detected lcore 31 as core 31 on socket 0 00:05:34.600 EAL: Detected lcore 32 as core 32 on socket 0 00:05:34.600 EAL: Detected lcore 33 as core 33 on socket 0 00:05:34.600 EAL: Detected lcore 34 as core 34 on socket 0 00:05:34.600 EAL: Detected lcore 35 as core 35 on socket 0 00:05:34.600 EAL: Detected lcore 36 as core 0 on socket 1 00:05:34.600 EAL: Detected lcore 37 as core 1 on socket 1 00:05:34.600 EAL: Detected lcore 38 as core 2 on socket 1 00:05:34.600 EAL: Detected lcore 39 as core 3 on socket 1 00:05:34.600 EAL: Detected lcore 40 as core 4 on socket 1 00:05:34.600 EAL: Detected lcore 41 as core 5 on socket 1 00:05:34.600 EAL: Detected lcore 42 as core 6 on socket 1 00:05:34.600 EAL: Detected lcore 43 as core 7 on socket 1 00:05:34.600 EAL: Detected lcore 44 as core 8 on socket 1 00:05:34.600 EAL: Detected lcore 45 as core 9 on socket 1 00:05:34.600 EAL: Detected lcore 46 as core 10 on socket 1 00:05:34.600 EAL: Detected lcore 47 as core 11 on socket 1 00:05:34.600 EAL: Detected lcore 48 as core 12 on socket 1 00:05:34.600 EAL: Detected lcore 49 as core 13 on socket 1 00:05:34.600 EAL: Detected lcore 50 as core 14 on socket 1 00:05:34.600 EAL: Detected lcore 51 as core 15 on socket 1 00:05:34.600 EAL: Detected lcore 52 as core 16 
on socket 1 00:05:34.600 EAL: Detected lcore 53 as core 17 on socket 1 00:05:34.600 EAL: Detected lcore 54 as core 18 on socket 1 00:05:34.600 EAL: Detected lcore 55 as core 19 on socket 1 00:05:34.600 EAL: Detected lcore 56 as core 20 on socket 1 00:05:34.600 EAL: Detected lcore 57 as core 21 on socket 1 00:05:34.600 EAL: Detected lcore 58 as core 22 on socket 1 00:05:34.600 EAL: Detected lcore 59 as core 23 on socket 1 00:05:34.600 EAL: Detected lcore 60 as core 24 on socket 1 00:05:34.600 EAL: Detected lcore 61 as core 25 on socket 1 00:05:34.600 EAL: Detected lcore 62 as core 26 on socket 1 00:05:34.600 EAL: Detected lcore 63 as core 27 on socket 1 00:05:34.600 EAL: Detected lcore 64 as core 28 on socket 1 00:05:34.600 EAL: Detected lcore 65 as core 29 on socket 1 00:05:34.600 EAL: Detected lcore 66 as core 30 on socket 1 00:05:34.600 EAL: Detected lcore 67 as core 31 on socket 1 00:05:34.600 EAL: Detected lcore 68 as core 32 on socket 1 00:05:34.600 EAL: Detected lcore 69 as core 33 on socket 1 00:05:34.600 EAL: Detected lcore 70 as core 34 on socket 1 00:05:34.600 EAL: Detected lcore 71 as core 35 on socket 1 00:05:34.600 EAL: Detected lcore 72 as core 0 on socket 0 00:05:34.600 EAL: Detected lcore 73 as core 1 on socket 0 00:05:34.600 EAL: Detected lcore 74 as core 2 on socket 0 00:05:34.600 EAL: Detected lcore 75 as core 3 on socket 0 00:05:34.600 EAL: Detected lcore 76 as core 4 on socket 0 00:05:34.600 EAL: Detected lcore 77 as core 5 on socket 0 00:05:34.600 EAL: Detected lcore 78 as core 6 on socket 0 00:05:34.600 EAL: Detected lcore 79 as core 7 on socket 0 00:05:34.600 EAL: Detected lcore 80 as core 8 on socket 0 00:05:34.600 EAL: Detected lcore 81 as core 9 on socket 0 00:05:34.600 EAL: Detected lcore 82 as core 10 on socket 0 00:05:34.600 EAL: Detected lcore 83 as core 11 on socket 0 00:05:34.600 EAL: Detected lcore 84 as core 12 on socket 0 00:05:34.600 EAL: Detected lcore 85 as core 13 on socket 0 00:05:34.600 EAL: Detected lcore 86 as core 14 on socket 0 00:05:34.600 EAL: Detected lcore 87 as core 15 on socket 0 00:05:34.600 EAL: Detected lcore 88 as core 16 on socket 0 00:05:34.600 EAL: Detected lcore 89 as core 17 on socket 0 00:05:34.600 EAL: Detected lcore 90 as core 18 on socket 0 00:05:34.600 EAL: Detected lcore 91 as core 19 on socket 0 00:05:34.600 EAL: Detected lcore 92 as core 20 on socket 0 00:05:34.600 EAL: Detected lcore 93 as core 21 on socket 0 00:05:34.600 EAL: Detected lcore 94 as core 22 on socket 0 00:05:34.601 EAL: Detected lcore 95 as core 23 on socket 0 00:05:34.601 EAL: Detected lcore 96 as core 24 on socket 0 00:05:34.601 EAL: Detected lcore 97 as core 25 on socket 0 00:05:34.601 EAL: Detected lcore 98 as core 26 on socket 0 00:05:34.601 EAL: Detected lcore 99 as core 27 on socket 0 00:05:34.601 EAL: Detected lcore 100 as core 28 on socket 0 00:05:34.601 EAL: Detected lcore 101 as core 29 on socket 0 00:05:34.601 EAL: Detected lcore 102 as core 30 on socket 0 00:05:34.601 EAL: Detected lcore 103 as core 31 on socket 0 00:05:34.601 EAL: Detected lcore 104 as core 32 on socket 0 00:05:34.601 EAL: Detected lcore 105 as core 33 on socket 0 00:05:34.601 EAL: Detected lcore 106 as core 34 on socket 0 00:05:34.601 EAL: Detected lcore 107 as core 35 on socket 0 00:05:34.601 EAL: Detected lcore 108 as core 0 on socket 1 00:05:34.601 EAL: Detected lcore 109 as core 1 on socket 1 00:05:34.601 EAL: Detected lcore 110 as core 2 on socket 1 00:05:34.601 EAL: Detected lcore 111 as core 3 on socket 1 00:05:34.601 EAL: Detected lcore 112 as core 4 on socket 1 
00:05:34.601 EAL: Detected lcore 113 as core 5 on socket 1 00:05:34.601 EAL: Detected lcore 114 as core 6 on socket 1 00:05:34.601 EAL: Detected lcore 115 as core 7 on socket 1 00:05:34.601 EAL: Detected lcore 116 as core 8 on socket 1 00:05:34.601 EAL: Detected lcore 117 as core 9 on socket 1 00:05:34.601 EAL: Detected lcore 118 as core 10 on socket 1 00:05:34.601 EAL: Detected lcore 119 as core 11 on socket 1 00:05:34.601 EAL: Detected lcore 120 as core 12 on socket 1 00:05:34.601 EAL: Detected lcore 121 as core 13 on socket 1 00:05:34.601 EAL: Detected lcore 122 as core 14 on socket 1 00:05:34.601 EAL: Detected lcore 123 as core 15 on socket 1 00:05:34.601 EAL: Detected lcore 124 as core 16 on socket 1 00:05:34.601 EAL: Detected lcore 125 as core 17 on socket 1 00:05:34.601 EAL: Detected lcore 126 as core 18 on socket 1 00:05:34.601 EAL: Detected lcore 127 as core 19 on socket 1 00:05:34.601 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:34.601 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:34.601 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:34.601 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:34.601 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:34.601 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:34.601 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:34.601 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:34.601 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:34.601 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:34.601 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:34.601 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:34.601 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:34.601 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:34.601 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:34.601 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:34.601 EAL: Maximum logical cores by configuration: 128 00:05:34.601 EAL: Detected CPU lcores: 128 00:05:34.601 EAL: Detected NUMA nodes: 2 00:05:34.601 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:34.601 EAL: Detected shared linkage of DPDK 00:05:34.601 EAL: No shared files mode enabled, IPC will be disabled 00:05:34.601 EAL: Bus pci wants IOVA as 'DC' 00:05:34.601 EAL: Buses did not request a specific IOVA mode. 00:05:34.601 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:34.601 EAL: Selected IOVA mode 'VA' 00:05:34.601 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.601 EAL: Probing VFIO support... 00:05:34.601 EAL: IOMMU type 1 (Type 1) is supported 00:05:34.601 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:34.601 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:34.601 EAL: VFIO support initialized 00:05:34.601 EAL: Ask a virtual area of 0x2e000 bytes 00:05:34.601 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:34.601 EAL: Setting up physically contiguous memory... 
00:05:34.601 EAL: Setting maximum number of open files to 524288 00:05:34.601 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:34.601 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:34.601 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:34.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.601 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:34.601 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.601 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:34.601 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:34.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.601 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:34.601 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.601 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:34.601 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:34.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.601 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:34.601 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.601 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:34.601 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:34.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.601 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:34.601 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.601 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:34.601 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:34.601 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:34.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.601 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:34.601 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.601 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:34.601 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:34.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.601 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:34.601 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.601 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:34.601 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:34.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.601 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:34.601 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.601 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:34.601 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:34.601 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.601 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:34.601 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.601 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.601 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:34.601 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:34.601 EAL: Hugepages will be freed exactly as allocated. 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: TSC frequency is ~2400000 KHz 00:05:34.601 EAL: Main lcore 0 is ready (tid=7fc0289f4a00;cpuset=[0]) 00:05:34.601 EAL: Trying to obtain current memory policy. 00:05:34.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.601 EAL: Restoring previous memory policy: 0 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was expanded by 2MB 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:34.601 EAL: Mem event callback 'spdk:(nil)' registered 00:05:34.601 00:05:34.601 00:05:34.601 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.601 http://cunit.sourceforge.net/ 00:05:34.601 00:05:34.601 00:05:34.601 Suite: components_suite 00:05:34.601 Test: vtophys_malloc_test ...passed 00:05:34.601 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:34.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.601 EAL: Restoring previous memory policy: 4 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was expanded by 4MB 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was shrunk by 4MB 00:05:34.601 EAL: Trying to obtain current memory policy. 00:05:34.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.601 EAL: Restoring previous memory policy: 4 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was expanded by 6MB 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was shrunk by 6MB 00:05:34.601 EAL: Trying to obtain current memory policy. 00:05:34.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.601 EAL: Restoring previous memory policy: 4 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was expanded by 10MB 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was shrunk by 10MB 00:05:34.601 EAL: Trying to obtain current memory policy. 
00:05:34.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.601 EAL: Restoring previous memory policy: 4 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was expanded by 18MB 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was shrunk by 18MB 00:05:34.601 EAL: Trying to obtain current memory policy. 00:05:34.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.601 EAL: Restoring previous memory policy: 4 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was expanded by 34MB 00:05:34.601 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.601 EAL: request: mp_malloc_sync 00:05:34.601 EAL: No shared files mode enabled, IPC is disabled 00:05:34.601 EAL: Heap on socket 0 was shrunk by 34MB 00:05:34.601 EAL: Trying to obtain current memory policy. 00:05:34.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.601 EAL: Restoring previous memory policy: 4 00:05:34.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.602 EAL: request: mp_malloc_sync 00:05:34.602 EAL: No shared files mode enabled, IPC is disabled 00:05:34.602 EAL: Heap on socket 0 was expanded by 66MB 00:05:34.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.602 EAL: request: mp_malloc_sync 00:05:34.602 EAL: No shared files mode enabled, IPC is disabled 00:05:34.602 EAL: Heap on socket 0 was shrunk by 66MB 00:05:34.602 EAL: Trying to obtain current memory policy. 00:05:34.602 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.602 EAL: Restoring previous memory policy: 4 00:05:34.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.602 EAL: request: mp_malloc_sync 00:05:34.602 EAL: No shared files mode enabled, IPC is disabled 00:05:34.602 EAL: Heap on socket 0 was expanded by 130MB 00:05:34.862 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.862 EAL: request: mp_malloc_sync 00:05:34.862 EAL: No shared files mode enabled, IPC is disabled 00:05:34.862 EAL: Heap on socket 0 was shrunk by 130MB 00:05:34.862 EAL: Trying to obtain current memory policy. 00:05:34.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.862 EAL: Restoring previous memory policy: 4 00:05:34.862 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.862 EAL: request: mp_malloc_sync 00:05:34.862 EAL: No shared files mode enabled, IPC is disabled 00:05:34.862 EAL: Heap on socket 0 was expanded by 258MB 00:05:34.862 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.862 EAL: request: mp_malloc_sync 00:05:34.862 EAL: No shared files mode enabled, IPC is disabled 00:05:34.862 EAL: Heap on socket 0 was shrunk by 258MB 00:05:34.862 EAL: Trying to obtain current memory policy. 
00:05:34.862 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:34.862 EAL: Restoring previous memory policy: 4
00:05:34.862 EAL: Calling mem event callback 'spdk:(nil)'
00:05:34.862 EAL: request: mp_malloc_sync
00:05:34.862 EAL: No shared files mode enabled, IPC is disabled
00:05:34.862 EAL: Heap on socket 0 was expanded by 514MB
00:05:34.862 EAL: Calling mem event callback 'spdk:(nil)'
00:05:35.122 EAL: request: mp_malloc_sync
00:05:35.122 EAL: No shared files mode enabled, IPC is disabled
00:05:35.122 EAL: Heap on socket 0 was shrunk by 514MB
00:05:35.122 EAL: Trying to obtain current memory policy.
00:05:35.122 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:35.122 EAL: Restoring previous memory policy: 4
00:05:35.122 EAL: Calling mem event callback 'spdk:(nil)'
00:05:35.122 EAL: request: mp_malloc_sync
00:05:35.122 EAL: No shared files mode enabled, IPC is disabled
00:05:35.122 EAL: Heap on socket 0 was expanded by 1026MB
00:05:35.122 EAL: Calling mem event callback 'spdk:(nil)'
00:05:35.383 EAL: request: mp_malloc_sync
00:05:35.383 EAL: No shared files mode enabled, IPC is disabled
00:05:35.383 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:35.383 passed
00:05:35.383
00:05:35.383 Run Summary: Type Total Ran Passed Failed Inactive
00:05:35.383 suites 1 1 n/a 0 0
00:05:35.383 tests 2 2 2 0 0
00:05:35.383 asserts 497 497 497 0 n/a
00:05:35.383
00:05:35.383 Elapsed time = 0.659 seconds
00:05:35.383 EAL: Calling mem event callback 'spdk:(nil)'
00:05:35.383 EAL: request: mp_malloc_sync
00:05:35.383 EAL: No shared files mode enabled, IPC is disabled
00:05:35.383 EAL: Heap on socket 0 was shrunk by 2MB
00:05:35.383 EAL: No shared files mode enabled, IPC is disabled
00:05:35.383 EAL: No shared files mode enabled, IPC is disabled
00:05:35.383 EAL: No shared files mode enabled, IPC is disabled
00:05:35.383
00:05:35.383 real 0m0.784s
00:05:35.383 user 0m0.420s
00:05:35.383 sys 0m0.339s
00:05:35.383 14:47:51 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:35.383 14:47:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:35.383 ************************************
00:05:35.383 END TEST env_vtophys
00:05:35.383 ************************************
00:05:35.383 14:47:51 env -- common/autotest_common.sh@1142 -- # return 0
00:05:35.383 14:47:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:05:35.383 14:47:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:35.383 14:47:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:35.383 14:47:51 env -- common/autotest_common.sh@10 -- # set +x
00:05:35.383 ************************************
00:05:35.383 START TEST env_pci
00:05:35.383 ************************************
00:05:35.383 14:47:51 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:05:35.383
00:05:35.383
00:05:35.383 CUnit - A unit testing framework for C - Version 2.1-3
00:05:35.383 http://cunit.sourceforge.net/
00:05:35.383
00:05:35.383
00:05:35.383 Suite: pci
00:05:35.383 Test: pci_hook ...[2024-07-15 14:47:51.363923] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1612694 has claimed it
00:05:35.383 EAL: Cannot find device (10000:00:01.0)
00:05:35.383 EAL: Failed to attach device on primary process
00:05:35.383 passed
00:05:35.383
00:05:35.383 Run Summary: Type Total Ran Passed Failed Inactive
00:05:35.383 suites 1 1 n/a 0 0
00:05:35.383 tests 1 1 1 0 0
00:05:35.383 asserts 25 25 25 0 n/a
00:05:35.383
00:05:35.383 Elapsed time = 0.040 seconds
00:05:35.383
00:05:35.383 real 0m0.060s
00:05:35.383 user 0m0.020s
00:05:35.383 sys 0m0.039s
00:05:35.383 14:47:51 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:35.383 14:47:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:35.383 ************************************
00:05:35.383 END TEST env_pci
00:05:35.383 ************************************
00:05:35.383 14:47:51 env -- common/autotest_common.sh@1142 -- # return 0
00:05:35.383 14:47:51 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:35.383 14:47:51 env -- env/env.sh@15 -- # uname
00:05:35.643 14:47:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:35.643 14:47:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:35.643 14:47:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:35.643 14:47:51 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:05:35.643 14:47:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:35.643 14:47:51 env -- common/autotest_common.sh@10 -- # set +x
00:05:35.643 ************************************
00:05:35.643 START TEST env_dpdk_post_init
00:05:35.643 ************************************
00:05:35.643 14:47:51 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:35.643 EAL: Detected CPU lcores: 128
00:05:35.643 EAL: Detected NUMA nodes: 2
00:05:35.643 EAL: Detected shared linkage of DPDK
00:05:35.643 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:35.643 EAL: Selected IOVA mode 'VA'
00:05:35.643 EAL: No free 2048 kB hugepages reported on node 1
00:05:35.643 EAL: VFIO support initialized
00:05:35.643 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:35.643 EAL: Using IOMMU type 1 (Type 1)
00:05:35.902 EAL: Ignore mapping IO port bar(1)
00:05:35.902 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:05:36.162 EAL: Ignore mapping IO port bar(1)
00:05:36.162 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:05:36.423 EAL: Ignore mapping IO port bar(1)
00:05:36.423 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:05:36.683 EAL: Ignore mapping IO port bar(1)
00:05:36.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:05:36.683 EAL: Ignore mapping IO port bar(1)
00:05:36.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:05:36.943 EAL: Ignore mapping IO port bar(1)
00:05:36.943 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:05:37.203 EAL: Ignore mapping IO port bar(1)
00:05:37.203 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:05:37.463 EAL: Ignore mapping IO port bar(1)
00:05:37.463 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:05:37.722 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:05:37.722 EAL: Ignore mapping IO port bar(1)
00:05:37.981 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:05:37.981 EAL:
Ignore mapping IO port bar(1) 00:05:38.241 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:38.241 EAL: Ignore mapping IO port bar(1) 00:05:38.241 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:38.501 EAL: Ignore mapping IO port bar(1) 00:05:38.501 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:38.761 EAL: Ignore mapping IO port bar(1) 00:05:38.761 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:39.022 EAL: Ignore mapping IO port bar(1) 00:05:39.022 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:39.282 EAL: Ignore mapping IO port bar(1) 00:05:39.282 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:39.282 EAL: Ignore mapping IO port bar(1) 00:05:39.543 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:39.543 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:39.543 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:39.543 Starting DPDK initialization... 00:05:39.543 Starting SPDK post initialization... 00:05:39.543 SPDK NVMe probe 00:05:39.543 Attaching to 0000:65:00.0 00:05:39.543 Attached to 0000:65:00.0 00:05:39.543 Cleaning up... 00:05:41.456 00:05:41.456 real 0m5.725s 00:05:41.456 user 0m0.185s 00:05:41.456 sys 0m0.088s 00:05:41.456 14:47:57 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.456 14:47:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.456 ************************************ 00:05:41.456 END TEST env_dpdk_post_init 00:05:41.456 ************************************ 00:05:41.456 14:47:57 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.456 14:47:57 env -- env/env.sh@26 -- # uname 00:05:41.456 14:47:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:41.456 14:47:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.456 14:47:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.456 14:47:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.456 14:47:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.456 ************************************ 00:05:41.456 START TEST env_mem_callbacks 00:05:41.456 ************************************ 00:05:41.456 14:47:57 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.456 EAL: Detected CPU lcores: 128 00:05:41.456 EAL: Detected NUMA nodes: 2 00:05:41.456 EAL: Detected shared linkage of DPDK 00:05:41.456 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.456 EAL: Selected IOVA mode 'VA' 00:05:41.456 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.456 EAL: VFIO support initialized 00:05:41.456 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.456 00:05:41.456 00:05:41.456 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.456 http://cunit.sourceforge.net/ 00:05:41.456 00:05:41.456 00:05:41.456 Suite: memory 00:05:41.456 Test: test ... 
00:05:41.456 register 0x200000200000 2097152
00:05:41.456 malloc 3145728
00:05:41.456 register 0x200000400000 4194304
00:05:41.456 buf 0x200000500000 len 3145728 PASSED
00:05:41.456 malloc 64
00:05:41.456 buf 0x2000004fff40 len 64 PASSED
00:05:41.456 malloc 4194304
00:05:41.456 register 0x200000800000 6291456
00:05:41.456 buf 0x200000a00000 len 4194304 PASSED
00:05:41.456 free 0x200000500000 3145728
00:05:41.456 free 0x2000004fff40 64
00:05:41.456 unregister 0x200000400000 4194304 PASSED
00:05:41.456 free 0x200000a00000 4194304
00:05:41.456 unregister 0x200000800000 6291456 PASSED
00:05:41.456 malloc 8388608
00:05:41.456 register 0x200000400000 10485760
00:05:41.456 buf 0x200000600000 len 8388608 PASSED
00:05:41.456 free 0x200000600000 8388608
00:05:41.456 unregister 0x200000400000 10485760 PASSED
00:05:41.456 passed
00:05:41.456
00:05:41.456 Run Summary: Type Total Ran Passed Failed Inactive
00:05:41.456 suites 1 1 n/a 0 0
00:05:41.456 tests 1 1 1 0 0
00:05:41.456 asserts 15 15 15 0 n/a
00:05:41.456
00:05:41.456 Elapsed time = 0.008 seconds
00:05:41.456
00:05:41.456 real 0m0.077s
00:05:41.456 user 0m0.028s
00:05:41.456 sys 0m0.050s
00:05:41.456 14:47:57 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:41.456 14:47:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:41.456 ************************************
00:05:41.456 END TEST env_mem_callbacks
00:05:41.456 ************************************
00:05:41.456 14:47:57 env -- common/autotest_common.sh@1142 -- # return 0
00:05:41.456
00:05:41.456 real 0m7.304s
00:05:41.456 user 0m0.996s
00:05:41.456 sys 0m0.852s
00:05:41.456 14:47:57 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:41.456 14:47:57 env -- common/autotest_common.sh@10 -- # set +x
00:05:41.456 ************************************
00:05:41.456 END TEST env
00:05:41.456 ************************************
00:05:41.456 14:47:57 -- common/autotest_common.sh@1142 -- # return 0
00:05:41.456 14:47:57 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:05:41.456 14:47:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:41.456 14:47:57 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.456 14:47:57 -- common/autotest_common.sh@10 -- # set +x
00:05:41.456 ************************************
00:05:41.456 START TEST rpc
00:05:41.456 ************************************
00:05:41.456 14:47:57 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:05:41.717 * Looking for test storage...
00:05:41.717 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc
00:05:41.717 14:47:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1614139
00:05:41.717 14:47:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:41.717 14:47:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:41.717 14:47:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1614139
00:05:41.717 14:47:57 rpc -- common/autotest_common.sh@829 -- # '[' -z 1614139 ']'
00:05:41.717 14:47:57 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:41.717 14:47:57 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:41.717 14:47:57 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:41.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.717 14:47:57 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.717 14:47:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.717 [2024-07-15 14:47:57.630346] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:41.717 [2024-07-15 14:47:57.630401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614139 ] 00:05:41.717 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.717 [2024-07-15 14:47:57.702826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.717 [2024-07-15 14:47:57.776642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.717 [2024-07-15 14:47:57.776682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1614139' to capture a snapshot of events at runtime. 00:05:41.717 [2024-07-15 14:47:57.776690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.717 [2024-07-15 14:47:57.776696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.717 [2024-07-15 14:47:57.776701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1614139 for offline analysis/debug. 00:05:41.717 [2024-07-15 14:47:57.776722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.349 14:47:58 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.349 14:47:58 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.349 14:47:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:42.349 14:47:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:42.349 14:47:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:42.349 14:47:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:42.349 14:47:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.349 14:47:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.349 14:47:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.626 ************************************ 00:05:42.626 START TEST rpc_integrity 00:05:42.626 ************************************ 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.626 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.626 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.626 14:47:58 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.626 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.626 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.626 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.626 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.626 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.626 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.626 { 00:05:42.626 "name": "Malloc0", 00:05:42.626 "aliases": [ 00:05:42.627 "1847d88c-4b43-4cf6-9d67-1048ebbdfc53" 00:05:42.627 ], 00:05:42.627 "product_name": "Malloc disk", 00:05:42.627 "block_size": 512, 00:05:42.627 "num_blocks": 16384, 00:05:42.627 "uuid": "1847d88c-4b43-4cf6-9d67-1048ebbdfc53", 00:05:42.627 "assigned_rate_limits": { 00:05:42.627 "rw_ios_per_sec": 0, 00:05:42.627 "rw_mbytes_per_sec": 0, 00:05:42.627 "r_mbytes_per_sec": 0, 00:05:42.627 "w_mbytes_per_sec": 0 00:05:42.627 }, 00:05:42.627 "claimed": false, 00:05:42.627 "zoned": false, 00:05:42.627 "supported_io_types": { 00:05:42.627 "read": true, 00:05:42.627 "write": true, 00:05:42.627 "unmap": true, 00:05:42.627 "flush": true, 00:05:42.627 "reset": true, 00:05:42.627 "nvme_admin": false, 00:05:42.627 "nvme_io": false, 00:05:42.627 "nvme_io_md": false, 00:05:42.627 "write_zeroes": true, 00:05:42.627 "zcopy": true, 00:05:42.627 "get_zone_info": false, 00:05:42.627 "zone_management": false, 00:05:42.627 "zone_append": false, 00:05:42.627 "compare": false, 00:05:42.627 "compare_and_write": false, 00:05:42.627 "abort": true, 00:05:42.627 "seek_hole": false, 00:05:42.627 "seek_data": false, 00:05:42.627 "copy": true, 00:05:42.627 "nvme_iov_md": false 00:05:42.627 }, 00:05:42.627 "memory_domains": [ 00:05:42.627 { 00:05:42.627 "dma_device_id": "system", 00:05:42.627 "dma_device_type": 1 00:05:42.627 }, 00:05:42.627 { 00:05:42.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.627 "dma_device_type": 2 00:05:42.627 } 00:05:42.627 ], 00:05:42.627 "driver_specific": {} 00:05:42.627 } 00:05:42.627 ]' 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.627 [2024-07-15 14:47:58.570425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.627 [2024-07-15 14:47:58.570459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.627 [2024-07-15 14:47:58.570472] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21f9490 00:05:42.627 [2024-07-15 14:47:58.570479] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.627 [2024-07-15 14:47:58.571856] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.627 [2024-07-15 14:47:58.571877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.627 Passthru0 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.627 { 00:05:42.627 "name": "Malloc0", 00:05:42.627 "aliases": [ 00:05:42.627 "1847d88c-4b43-4cf6-9d67-1048ebbdfc53" 00:05:42.627 ], 00:05:42.627 "product_name": "Malloc disk", 00:05:42.627 "block_size": 512, 00:05:42.627 "num_blocks": 16384, 00:05:42.627 "uuid": "1847d88c-4b43-4cf6-9d67-1048ebbdfc53", 00:05:42.627 "assigned_rate_limits": { 00:05:42.627 "rw_ios_per_sec": 0, 00:05:42.627 "rw_mbytes_per_sec": 0, 00:05:42.627 "r_mbytes_per_sec": 0, 00:05:42.627 "w_mbytes_per_sec": 0 00:05:42.627 }, 00:05:42.627 "claimed": true, 00:05:42.627 "claim_type": "exclusive_write", 00:05:42.627 "zoned": false, 00:05:42.627 "supported_io_types": { 00:05:42.627 "read": true, 00:05:42.627 "write": true, 00:05:42.627 "unmap": true, 00:05:42.627 "flush": true, 00:05:42.627 "reset": true, 00:05:42.627 "nvme_admin": false, 00:05:42.627 "nvme_io": false, 00:05:42.627 "nvme_io_md": false, 00:05:42.627 "write_zeroes": true, 00:05:42.627 "zcopy": true, 00:05:42.627 "get_zone_info": false, 00:05:42.627 "zone_management": false, 00:05:42.627 "zone_append": false, 00:05:42.627 "compare": false, 00:05:42.627 "compare_and_write": false, 00:05:42.627 "abort": true, 00:05:42.627 "seek_hole": false, 00:05:42.627 "seek_data": false, 00:05:42.627 "copy": true, 00:05:42.627 "nvme_iov_md": false 00:05:42.627 }, 00:05:42.627 "memory_domains": [ 00:05:42.627 { 00:05:42.627 "dma_device_id": "system", 00:05:42.627 "dma_device_type": 1 00:05:42.627 }, 00:05:42.627 { 00:05:42.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.627 "dma_device_type": 2 00:05:42.627 } 00:05:42.627 ], 00:05:42.627 "driver_specific": {} 00:05:42.627 }, 00:05:42.627 { 00:05:42.627 "name": "Passthru0", 00:05:42.627 "aliases": [ 00:05:42.627 "d4223cba-abf7-50ff-ae89-9c4826794afd" 00:05:42.627 ], 00:05:42.627 "product_name": "passthru", 00:05:42.627 "block_size": 512, 00:05:42.627 "num_blocks": 16384, 00:05:42.627 "uuid": "d4223cba-abf7-50ff-ae89-9c4826794afd", 00:05:42.627 "assigned_rate_limits": { 00:05:42.627 "rw_ios_per_sec": 0, 00:05:42.627 "rw_mbytes_per_sec": 0, 00:05:42.627 "r_mbytes_per_sec": 0, 00:05:42.627 "w_mbytes_per_sec": 0 00:05:42.627 }, 00:05:42.627 "claimed": false, 00:05:42.627 "zoned": false, 00:05:42.627 "supported_io_types": { 00:05:42.627 "read": true, 00:05:42.627 "write": true, 00:05:42.627 "unmap": true, 00:05:42.627 "flush": true, 00:05:42.627 "reset": true, 00:05:42.627 "nvme_admin": false, 00:05:42.627 "nvme_io": false, 00:05:42.627 "nvme_io_md": false, 00:05:42.627 "write_zeroes": true, 00:05:42.627 "zcopy": true, 00:05:42.627 "get_zone_info": false, 00:05:42.627 "zone_management": false, 00:05:42.627 "zone_append": false, 00:05:42.627 "compare": false, 00:05:42.627 "compare_and_write": false, 00:05:42.627 "abort": true, 00:05:42.627 "seek_hole": false, 00:05:42.627 "seek_data": 
false, 00:05:42.627 "copy": true, 00:05:42.627 "nvme_iov_md": false 00:05:42.627 }, 00:05:42.627 "memory_domains": [ 00:05:42.627 { 00:05:42.627 "dma_device_id": "system", 00:05:42.627 "dma_device_type": 1 00:05:42.627 }, 00:05:42.627 { 00:05:42.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.627 "dma_device_type": 2 00:05:42.627 } 00:05:42.627 ], 00:05:42.627 "driver_specific": { 00:05:42.627 "passthru": { 00:05:42.627 "name": "Passthru0", 00:05:42.627 "base_bdev_name": "Malloc0" 00:05:42.627 } 00:05:42.627 } 00:05:42.627 } 00:05:42.627 ]' 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.627 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.627 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.889 14:47:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.889 00:05:42.889 real 0m0.290s 00:05:42.889 user 0m0.195s 00:05:42.889 sys 0m0.031s 00:05:42.889 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.889 14:47:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.889 ************************************ 00:05:42.889 END TEST rpc_integrity 00:05:42.889 ************************************ 00:05:42.889 14:47:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.889 14:47:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.889 14:47:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.889 14:47:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.889 14:47:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.889 ************************************ 00:05:42.889 START TEST rpc_plugins 00:05:42.889 ************************************ 00:05:42.889 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:42.889 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.889 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.889 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.889 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.889 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:42.889 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 
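For readers decoding the rpc_integrity trace above: the whole test is a create/claim/teardown round-trip over JSON-RPC, asserting the bdev count with jq at each step. A minimal sketch of the same sequence, assuming an already-running spdk_tgt on the default /var/tmp/spdk.sock and the stock scripts/rpc.py client (the test itself drives these calls through its rpc_cmd wrapper):

  rpc=scripts/rpc.py
  [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]        # start from an empty bdev list
  malloc=$($rpc bdev_malloc_create 8 512)               # 8 MiB malloc bdev, 512-byte blocks; prints its name (Malloc0 here)
  $rpc bdev_passthru_create -b "$malloc" -p Passthru0   # passthru vbdev claims the malloc bdev ("claimed": true above)
  [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]        # both bdevs are now listed
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete "$malloc"
  [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]        # teardown leaves the list empty again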
00:05:42.889 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.889 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.889 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.889 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.889 { 00:05:42.889 "name": "Malloc1", 00:05:42.889 "aliases": [ 00:05:42.889 "97114cba-047a-41ae-8215-58899afcabe5" 00:05:42.889 ], 00:05:42.889 "product_name": "Malloc disk", 00:05:42.889 "block_size": 4096, 00:05:42.889 "num_blocks": 256, 00:05:42.889 "uuid": "97114cba-047a-41ae-8215-58899afcabe5", 00:05:42.889 "assigned_rate_limits": { 00:05:42.889 "rw_ios_per_sec": 0, 00:05:42.889 "rw_mbytes_per_sec": 0, 00:05:42.889 "r_mbytes_per_sec": 0, 00:05:42.889 "w_mbytes_per_sec": 0 00:05:42.889 }, 00:05:42.889 "claimed": false, 00:05:42.889 "zoned": false, 00:05:42.889 "supported_io_types": { 00:05:42.889 "read": true, 00:05:42.889 "write": true, 00:05:42.889 "unmap": true, 00:05:42.889 "flush": true, 00:05:42.889 "reset": true, 00:05:42.889 "nvme_admin": false, 00:05:42.889 "nvme_io": false, 00:05:42.889 "nvme_io_md": false, 00:05:42.889 "write_zeroes": true, 00:05:42.889 "zcopy": true, 00:05:42.889 "get_zone_info": false, 00:05:42.889 "zone_management": false, 00:05:42.889 "zone_append": false, 00:05:42.889 "compare": false, 00:05:42.889 "compare_and_write": false, 00:05:42.889 "abort": true, 00:05:42.889 "seek_hole": false, 00:05:42.889 "seek_data": false, 00:05:42.889 "copy": true, 00:05:42.889 "nvme_iov_md": false 00:05:42.889 }, 00:05:42.889 "memory_domains": [ 00:05:42.889 { 00:05:42.889 "dma_device_id": "system", 00:05:42.889 "dma_device_type": 1 00:05:42.889 }, 00:05:42.889 { 00:05:42.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.889 "dma_device_type": 2 00:05:42.889 } 00:05:42.889 ], 00:05:42.889 "driver_specific": {} 00:05:42.889 } 00:05:42.889 ]' 00:05:42.889 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:42.889 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.889 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.889 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.889 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.890 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.890 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.890 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.890 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.890 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.890 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.890 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.890 14:47:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.890 00:05:42.890 real 0m0.150s 00:05:42.890 user 0m0.090s 00:05:42.890 sys 0m0.024s 00:05:42.890 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.890 14:47:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.890 ************************************ 00:05:42.890 END TEST rpc_plugins 00:05:42.890 ************************************ 00:05:43.151 14:47:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.151 14:47:58 rpc -- rpc/rpc.sh@75 -- # run_test 
rpc_trace_cmd_test rpc_trace_cmd_test 00:05:43.151 14:47:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.151 14:47:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.151 14:47:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.151 ************************************ 00:05:43.151 START TEST rpc_trace_cmd_test 00:05:43.151 ************************************ 00:05:43.151 14:47:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:43.151 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:43.151 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:43.151 14:47:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.151 14:47:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.151 14:47:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.151 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:43.151 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1614139", 00:05:43.151 "tpoint_group_mask": "0x8", 00:05:43.151 "iscsi_conn": { 00:05:43.151 "mask": "0x2", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "scsi": { 00:05:43.151 "mask": "0x4", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "bdev": { 00:05:43.151 "mask": "0x8", 00:05:43.151 "tpoint_mask": "0xffffffffffffffff" 00:05:43.151 }, 00:05:43.151 "nvmf_rdma": { 00:05:43.151 "mask": "0x10", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "nvmf_tcp": { 00:05:43.151 "mask": "0x20", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "ftl": { 00:05:43.151 "mask": "0x40", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "blobfs": { 00:05:43.151 "mask": "0x80", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "dsa": { 00:05:43.151 "mask": "0x200", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "thread": { 00:05:43.151 "mask": "0x400", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "nvme_pcie": { 00:05:43.151 "mask": "0x800", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "iaa": { 00:05:43.151 "mask": "0x1000", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "nvme_tcp": { 00:05:43.151 "mask": "0x2000", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "bdev_nvme": { 00:05:43.151 "mask": "0x4000", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 }, 00:05:43.151 "sock": { 00:05:43.151 "mask": "0x8000", 00:05:43.151 "tpoint_mask": "0x0" 00:05:43.151 } 00:05:43.151 }' 00:05:43.151 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:43.151 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:43.152 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:43.152 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:43.152 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:43.152 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:43.152 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:43.152 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:43.152 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:43.413 14:47:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:43.413 
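The trace_get_info output above is worth unpacking: tpoint_group_mask 0x8 selects the bdev tracepoint group, that group's own tpoint_mask is fully enabled, and tpoint_shm_path names the shared-memory ring an offline decoder can read later. A sketch of producing and checking that state, assuming the target is launched with the bdev trace group enabled (-e bdev on current builds, or the hex mask 0x8 on older ones) and that this SPDK build exposes the trace_get_info RPC:

  build/bin/spdk_tgt -e bdev -m 0x1 &
  sleep 2                                                     # or wait on /var/tmp/spdk.sock
  scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask    # "0x8" -> bdev group enabled
  scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask     # non-zero per-group mask
  scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path      # /dev/shm/spdk_tgt_trace.pid<PID>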
00:05:43.413 real 0m0.208s 00:05:43.413 user 0m0.180s 00:05:43.413 sys 0m0.020s 00:05:43.413 14:47:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.413 14:47:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.413 ************************************ 00:05:43.413 END TEST rpc_trace_cmd_test 00:05:43.413 ************************************ 00:05:43.413 14:47:59 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.413 14:47:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:43.413 14:47:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:43.413 14:47:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:43.413 14:47:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.413 14:47:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.413 14:47:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.413 ************************************ 00:05:43.413 START TEST rpc_daemon_integrity 00:05:43.413 ************************************ 00:05:43.413 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.414 { 00:05:43.414 "name": "Malloc2", 00:05:43.414 "aliases": [ 00:05:43.414 "5d2da2f2-e746-4e6c-bb95-b1d4d92a15d1" 00:05:43.414 ], 00:05:43.414 "product_name": "Malloc disk", 00:05:43.414 "block_size": 512, 00:05:43.414 "num_blocks": 16384, 00:05:43.414 "uuid": "5d2da2f2-e746-4e6c-bb95-b1d4d92a15d1", 00:05:43.414 "assigned_rate_limits": { 00:05:43.414 "rw_ios_per_sec": 0, 00:05:43.414 "rw_mbytes_per_sec": 0, 00:05:43.414 "r_mbytes_per_sec": 0, 00:05:43.414 "w_mbytes_per_sec": 0 00:05:43.414 }, 00:05:43.414 "claimed": false, 00:05:43.414 "zoned": false, 00:05:43.414 "supported_io_types": { 00:05:43.414 "read": true, 00:05:43.414 "write": true, 00:05:43.414 "unmap": true, 00:05:43.414 "flush": true, 00:05:43.414 "reset": true, 00:05:43.414 "nvme_admin": false, 00:05:43.414 "nvme_io": false, 00:05:43.414 
"nvme_io_md": false, 00:05:43.414 "write_zeroes": true, 00:05:43.414 "zcopy": true, 00:05:43.414 "get_zone_info": false, 00:05:43.414 "zone_management": false, 00:05:43.414 "zone_append": false, 00:05:43.414 "compare": false, 00:05:43.414 "compare_and_write": false, 00:05:43.414 "abort": true, 00:05:43.414 "seek_hole": false, 00:05:43.414 "seek_data": false, 00:05:43.414 "copy": true, 00:05:43.414 "nvme_iov_md": false 00:05:43.414 }, 00:05:43.414 "memory_domains": [ 00:05:43.414 { 00:05:43.414 "dma_device_id": "system", 00:05:43.414 "dma_device_type": 1 00:05:43.414 }, 00:05:43.414 { 00:05:43.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.414 "dma_device_type": 2 00:05:43.414 } 00:05:43.414 ], 00:05:43.414 "driver_specific": {} 00:05:43.414 } 00:05:43.414 ]' 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.414 [2024-07-15 14:47:59.444817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:43.414 [2024-07-15 14:47:59.444845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.414 [2024-07-15 14:47:59.444859] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x239cf70 00:05:43.414 [2024-07-15 14:47:59.444866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.414 [2024-07-15 14:47:59.446099] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.414 [2024-07-15 14:47:59.446118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.414 Passthru0 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.414 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.675 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.675 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.675 { 00:05:43.675 "name": "Malloc2", 00:05:43.675 "aliases": [ 00:05:43.675 "5d2da2f2-e746-4e6c-bb95-b1d4d92a15d1" 00:05:43.675 ], 00:05:43.675 "product_name": "Malloc disk", 00:05:43.675 "block_size": 512, 00:05:43.675 "num_blocks": 16384, 00:05:43.675 "uuid": "5d2da2f2-e746-4e6c-bb95-b1d4d92a15d1", 00:05:43.675 "assigned_rate_limits": { 00:05:43.675 "rw_ios_per_sec": 0, 00:05:43.675 "rw_mbytes_per_sec": 0, 00:05:43.675 "r_mbytes_per_sec": 0, 00:05:43.675 "w_mbytes_per_sec": 0 00:05:43.675 }, 00:05:43.675 "claimed": true, 00:05:43.675 "claim_type": "exclusive_write", 00:05:43.675 "zoned": false, 00:05:43.675 "supported_io_types": { 00:05:43.675 "read": true, 00:05:43.675 "write": true, 00:05:43.675 "unmap": true, 00:05:43.675 "flush": true, 00:05:43.675 "reset": true, 00:05:43.675 "nvme_admin": false, 00:05:43.675 "nvme_io": false, 00:05:43.675 "nvme_io_md": false, 00:05:43.675 "write_zeroes": true, 00:05:43.675 "zcopy": true, 00:05:43.675 "get_zone_info": false, 
00:05:43.675 "zone_management": false, 00:05:43.675 "zone_append": false, 00:05:43.675 "compare": false, 00:05:43.675 "compare_and_write": false, 00:05:43.675 "abort": true, 00:05:43.675 "seek_hole": false, 00:05:43.675 "seek_data": false, 00:05:43.675 "copy": true, 00:05:43.675 "nvme_iov_md": false 00:05:43.675 }, 00:05:43.675 "memory_domains": [ 00:05:43.675 { 00:05:43.675 "dma_device_id": "system", 00:05:43.675 "dma_device_type": 1 00:05:43.675 }, 00:05:43.675 { 00:05:43.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.675 "dma_device_type": 2 00:05:43.675 } 00:05:43.675 ], 00:05:43.675 "driver_specific": {} 00:05:43.675 }, 00:05:43.675 { 00:05:43.675 "name": "Passthru0", 00:05:43.675 "aliases": [ 00:05:43.675 "b25ce60e-ab50-5032-89fe-987398a07e72" 00:05:43.675 ], 00:05:43.675 "product_name": "passthru", 00:05:43.675 "block_size": 512, 00:05:43.675 "num_blocks": 16384, 00:05:43.675 "uuid": "b25ce60e-ab50-5032-89fe-987398a07e72", 00:05:43.675 "assigned_rate_limits": { 00:05:43.675 "rw_ios_per_sec": 0, 00:05:43.675 "rw_mbytes_per_sec": 0, 00:05:43.675 "r_mbytes_per_sec": 0, 00:05:43.675 "w_mbytes_per_sec": 0 00:05:43.675 }, 00:05:43.675 "claimed": false, 00:05:43.675 "zoned": false, 00:05:43.675 "supported_io_types": { 00:05:43.675 "read": true, 00:05:43.675 "write": true, 00:05:43.675 "unmap": true, 00:05:43.675 "flush": true, 00:05:43.675 "reset": true, 00:05:43.675 "nvme_admin": false, 00:05:43.675 "nvme_io": false, 00:05:43.675 "nvme_io_md": false, 00:05:43.675 "write_zeroes": true, 00:05:43.675 "zcopy": true, 00:05:43.675 "get_zone_info": false, 00:05:43.675 "zone_management": false, 00:05:43.675 "zone_append": false, 00:05:43.675 "compare": false, 00:05:43.676 "compare_and_write": false, 00:05:43.676 "abort": true, 00:05:43.676 "seek_hole": false, 00:05:43.676 "seek_data": false, 00:05:43.676 "copy": true, 00:05:43.676 "nvme_iov_md": false 00:05:43.676 }, 00:05:43.676 "memory_domains": [ 00:05:43.676 { 00:05:43.676 "dma_device_id": "system", 00:05:43.676 "dma_device_type": 1 00:05:43.676 }, 00:05:43.676 { 00:05:43.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.676 "dma_device_type": 2 00:05:43.676 } 00:05:43.676 ], 00:05:43.676 "driver_specific": { 00:05:43.676 "passthru": { 00:05:43.676 "name": "Passthru0", 00:05:43.676 "base_bdev_name": "Malloc2" 00:05:43.676 } 00:05:43.676 } 00:05:43.676 } 00:05:43.676 ]' 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.676 14:47:59 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.676 00:05:43.676 real 0m0.296s 00:05:43.676 user 0m0.191s 00:05:43.676 sys 0m0.040s 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.676 14:47:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.676 ************************************ 00:05:43.676 END TEST rpc_daemon_integrity 00:05:43.676 ************************************ 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.676 14:47:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.676 14:47:59 rpc -- rpc/rpc.sh@84 -- # killprocess 1614139 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@948 -- # '[' -z 1614139 ']' 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@952 -- # kill -0 1614139 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@953 -- # uname 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1614139 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1614139' 00:05:43.676 killing process with pid 1614139 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@967 -- # kill 1614139 00:05:43.676 14:47:59 rpc -- common/autotest_common.sh@972 -- # wait 1614139 00:05:43.937 00:05:43.937 real 0m2.420s 00:05:43.937 user 0m3.162s 00:05:43.937 sys 0m0.691s 00:05:43.937 14:47:59 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.937 14:47:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.937 ************************************ 00:05:43.937 END TEST rpc 00:05:43.937 ************************************ 00:05:43.937 14:47:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.937 14:47:59 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.937 14:47:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.937 14:47:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.937 14:47:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.937 ************************************ 00:05:43.937 START TEST skip_rpc 00:05:43.937 ************************************ 00:05:43.937 14:47:59 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:44.199 * Looking for test storage... 
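The killprocess calls sprinkled through the traces above come from the shared helpers in test/common/autotest_common.sh. Reconstructed from what the trace shows (a kill -0 liveness check, a ps comm= sanity check so a sudo wrapper is never signalled, then kill and wait), a condensed sketch of that helper might look like the following; treat it as an approximation of the real implementation, not a copy of it:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" != "sudo" ] || return 1                 # never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                               # reap it so no zombie outlives the test
  }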
00:05:44.199 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:44.199 14:48:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:44.199 14:48:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:44.199 14:48:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:44.199 14:48:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.199 14:48:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.199 14:48:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.199 ************************************ 00:05:44.199 START TEST skip_rpc 00:05:44.199 ************************************ 00:05:44.199 14:48:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:44.199 14:48:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1614770 00:05:44.199 14:48:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.199 14:48:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:44.199 14:48:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:44.199 [2024-07-15 14:48:00.171175] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:44.199 [2024-07-15 14:48:00.171249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614770 ] 00:05:44.199 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.199 [2024-07-15 14:48:00.244935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.460 [2024-07-15 14:48:00.318832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 
-- # trap - SIGINT SIGTERM EXIT 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1614770 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1614770 ']' 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1614770 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1614770 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1614770' 00:05:49.744 killing process with pid 1614770 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1614770 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1614770 00:05:49.744 00:05:49.744 real 0m5.279s 00:05:49.744 user 0m5.067s 00:05:49.744 sys 0m0.238s 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.744 14:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.744 ************************************ 00:05:49.744 END TEST skip_rpc 00:05:49.744 ************************************ 00:05:49.744 14:48:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.744 14:48:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:49.744 14:48:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.744 14:48:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.744 14:48:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.744 ************************************ 00:05:49.744 START TEST skip_rpc_with_json 00:05:49.744 ************************************ 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1616071 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1616071 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1616071 ']' 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
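The skip_rpc case that just finished is a negative test: with --no-rpc-server there is nothing listening on /var/tmp/spdk.sock, so spdk_get_version has to fail and the NOT wrapper converts that failure into a pass. A stand-alone sketch of the same check, with plain ! in place of the suite's NOT helper:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                      # cannot wait on the RPC socket, there is none
  ! scripts/rpc.py spdk_get_version            # must fail: no RPC server was started
  kill "$tgt_pid"; wait "$tgt_pid" || true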
00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.744 14:48:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.744 [2024-07-15 14:48:05.524435] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:49.744 [2024-07-15 14:48:05.524487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616071 ] 00:05:49.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.744 [2024-07-15 14:48:05.594753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.744 [2024-07-15 14:48:05.664112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.371 [2024-07-15 14:48:06.302803] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:50.371 request: 00:05:50.371 { 00:05:50.371 "trtype": "tcp", 00:05:50.371 "method": "nvmf_get_transports", 00:05:50.371 "req_id": 1 00:05:50.371 } 00:05:50.371 Got JSON-RPC error response 00:05:50.371 response: 00:05:50.371 { 00:05:50.371 "code": -19, 00:05:50.371 "message": "No such device" 00:05:50.371 } 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.371 [2024-07-15 14:48:06.310920] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.371 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.632 14:48:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:50.632 { 00:05:50.632 "subsystems": [ 00:05:50.632 { 00:05:50.632 "subsystem": "keyring", 00:05:50.632 "config": [] 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "subsystem": "iobuf", 00:05:50.632 "config": [ 00:05:50.632 { 00:05:50.632 "method": "iobuf_set_options", 00:05:50.632 "params": { 00:05:50.632 "small_pool_count": 8192, 00:05:50.632 "large_pool_count": 1024, 00:05:50.632 "small_bufsize": 8192, 00:05:50.632 "large_bufsize": 135168 00:05:50.632 } 00:05:50.632 } 00:05:50.632 ] 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "subsystem": 
"sock", 00:05:50.632 "config": [ 00:05:50.632 { 00:05:50.632 "method": "sock_set_default_impl", 00:05:50.632 "params": { 00:05:50.632 "impl_name": "posix" 00:05:50.632 } 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "method": "sock_impl_set_options", 00:05:50.632 "params": { 00:05:50.632 "impl_name": "ssl", 00:05:50.632 "recv_buf_size": 4096, 00:05:50.632 "send_buf_size": 4096, 00:05:50.632 "enable_recv_pipe": true, 00:05:50.632 "enable_quickack": false, 00:05:50.632 "enable_placement_id": 0, 00:05:50.632 "enable_zerocopy_send_server": true, 00:05:50.632 "enable_zerocopy_send_client": false, 00:05:50.632 "zerocopy_threshold": 0, 00:05:50.632 "tls_version": 0, 00:05:50.632 "enable_ktls": false 00:05:50.632 } 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "method": "sock_impl_set_options", 00:05:50.632 "params": { 00:05:50.632 "impl_name": "posix", 00:05:50.632 "recv_buf_size": 2097152, 00:05:50.632 "send_buf_size": 2097152, 00:05:50.632 "enable_recv_pipe": true, 00:05:50.632 "enable_quickack": false, 00:05:50.632 "enable_placement_id": 0, 00:05:50.632 "enable_zerocopy_send_server": true, 00:05:50.632 "enable_zerocopy_send_client": false, 00:05:50.632 "zerocopy_threshold": 0, 00:05:50.632 "tls_version": 0, 00:05:50.632 "enable_ktls": false 00:05:50.632 } 00:05:50.632 } 00:05:50.632 ] 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "subsystem": "vmd", 00:05:50.632 "config": [] 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "subsystem": "accel", 00:05:50.632 "config": [ 00:05:50.632 { 00:05:50.632 "method": "accel_set_options", 00:05:50.632 "params": { 00:05:50.632 "small_cache_size": 128, 00:05:50.632 "large_cache_size": 16, 00:05:50.632 "task_count": 2048, 00:05:50.632 "sequence_count": 2048, 00:05:50.632 "buf_count": 2048 00:05:50.632 } 00:05:50.632 } 00:05:50.632 ] 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "subsystem": "bdev", 00:05:50.632 "config": [ 00:05:50.632 { 00:05:50.632 "method": "bdev_set_options", 00:05:50.632 "params": { 00:05:50.632 "bdev_io_pool_size": 65535, 00:05:50.632 "bdev_io_cache_size": 256, 00:05:50.632 "bdev_auto_examine": true, 00:05:50.632 "iobuf_small_cache_size": 128, 00:05:50.632 "iobuf_large_cache_size": 16 00:05:50.632 } 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "method": "bdev_raid_set_options", 00:05:50.632 "params": { 00:05:50.632 "process_window_size_kb": 1024 00:05:50.632 } 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "method": "bdev_iscsi_set_options", 00:05:50.632 "params": { 00:05:50.632 "timeout_sec": 30 00:05:50.632 } 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "method": "bdev_nvme_set_options", 00:05:50.632 "params": { 00:05:50.632 "action_on_timeout": "none", 00:05:50.632 "timeout_us": 0, 00:05:50.632 "timeout_admin_us": 0, 00:05:50.632 "keep_alive_timeout_ms": 10000, 00:05:50.632 "arbitration_burst": 0, 00:05:50.632 "low_priority_weight": 0, 00:05:50.632 "medium_priority_weight": 0, 00:05:50.632 "high_priority_weight": 0, 00:05:50.632 "nvme_adminq_poll_period_us": 10000, 00:05:50.632 "nvme_ioq_poll_period_us": 0, 00:05:50.632 "io_queue_requests": 0, 00:05:50.632 "delay_cmd_submit": true, 00:05:50.632 "transport_retry_count": 4, 00:05:50.632 "bdev_retry_count": 3, 00:05:50.632 "transport_ack_timeout": 0, 00:05:50.632 "ctrlr_loss_timeout_sec": 0, 00:05:50.632 "reconnect_delay_sec": 0, 00:05:50.632 "fast_io_fail_timeout_sec": 0, 00:05:50.632 "disable_auto_failback": false, 00:05:50.632 "generate_uuids": false, 00:05:50.632 "transport_tos": 0, 00:05:50.632 "nvme_error_stat": false, 00:05:50.632 "rdma_srq_size": 0, 00:05:50.632 "io_path_stat": false, 
00:05:50.632 "allow_accel_sequence": false, 00:05:50.632 "rdma_max_cq_size": 0, 00:05:50.632 "rdma_cm_event_timeout_ms": 0, 00:05:50.632 "dhchap_digests": [ 00:05:50.632 "sha256", 00:05:50.632 "sha384", 00:05:50.632 "sha512" 00:05:50.632 ], 00:05:50.632 "dhchap_dhgroups": [ 00:05:50.632 "null", 00:05:50.632 "ffdhe2048", 00:05:50.632 "ffdhe3072", 00:05:50.632 "ffdhe4096", 00:05:50.632 "ffdhe6144", 00:05:50.632 "ffdhe8192" 00:05:50.632 ] 00:05:50.632 } 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "method": "bdev_nvme_set_hotplug", 00:05:50.632 "params": { 00:05:50.632 "period_us": 100000, 00:05:50.632 "enable": false 00:05:50.632 } 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "method": "bdev_wait_for_examine" 00:05:50.632 } 00:05:50.632 ] 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "subsystem": "scsi", 00:05:50.632 "config": null 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "subsystem": "scheduler", 00:05:50.632 "config": [ 00:05:50.632 { 00:05:50.632 "method": "framework_set_scheduler", 00:05:50.633 "params": { 00:05:50.633 "name": "static" 00:05:50.633 } 00:05:50.633 } 00:05:50.633 ] 00:05:50.633 }, 00:05:50.633 { 00:05:50.633 "subsystem": "vhost_scsi", 00:05:50.633 "config": [] 00:05:50.633 }, 00:05:50.633 { 00:05:50.633 "subsystem": "vhost_blk", 00:05:50.633 "config": [] 00:05:50.633 }, 00:05:50.633 { 00:05:50.633 "subsystem": "ublk", 00:05:50.633 "config": [] 00:05:50.633 }, 00:05:50.633 { 00:05:50.633 "subsystem": "nbd", 00:05:50.633 "config": [] 00:05:50.633 }, 00:05:50.633 { 00:05:50.633 "subsystem": "nvmf", 00:05:50.633 "config": [ 00:05:50.633 { 00:05:50.633 "method": "nvmf_set_config", 00:05:50.633 "params": { 00:05:50.633 "discovery_filter": "match_any", 00:05:50.633 "admin_cmd_passthru": { 00:05:50.633 "identify_ctrlr": false 00:05:50.633 } 00:05:50.633 } 00:05:50.633 }, 00:05:50.633 { 00:05:50.633 "method": "nvmf_set_max_subsystems", 00:05:50.633 "params": { 00:05:50.633 "max_subsystems": 1024 00:05:50.633 } 00:05:50.633 }, 00:05:50.633 { 00:05:50.633 "method": "nvmf_set_crdt", 00:05:50.633 "params": { 00:05:50.633 "crdt1": 0, 00:05:50.633 "crdt2": 0, 00:05:50.633 "crdt3": 0 00:05:50.633 } 00:05:50.633 }, 00:05:50.633 { 00:05:50.633 "method": "nvmf_create_transport", 00:05:50.633 "params": { 00:05:50.633 "trtype": "TCP", 00:05:50.633 "max_queue_depth": 128, 00:05:50.633 "max_io_qpairs_per_ctrlr": 127, 00:05:50.633 "in_capsule_data_size": 4096, 00:05:50.633 "max_io_size": 131072, 00:05:50.633 "io_unit_size": 131072, 00:05:50.633 "max_aq_depth": 128, 00:05:50.633 "num_shared_buffers": 511, 00:05:50.633 "buf_cache_size": 4294967295, 00:05:50.633 "dif_insert_or_strip": false, 00:05:50.633 "zcopy": false, 00:05:50.633 "c2h_success": true, 00:05:50.633 "sock_priority": 0, 00:05:50.633 "abort_timeout_sec": 1, 00:05:50.633 "ack_timeout": 0, 00:05:50.633 "data_wr_pool_size": 0 00:05:50.633 } 00:05:50.633 } 00:05:50.633 ] 00:05:50.633 }, 00:05:50.633 { 00:05:50.633 "subsystem": "iscsi", 00:05:50.633 "config": [ 00:05:50.633 { 00:05:50.633 "method": "iscsi_set_options", 00:05:50.633 "params": { 00:05:50.633 "node_base": "iqn.2016-06.io.spdk", 00:05:50.633 "max_sessions": 128, 00:05:50.633 "max_connections_per_session": 2, 00:05:50.633 "max_queue_depth": 64, 00:05:50.633 "default_time2wait": 2, 00:05:50.633 "default_time2retain": 20, 00:05:50.633 "first_burst_length": 8192, 00:05:50.633 "immediate_data": true, 00:05:50.633 "allow_duplicated_isid": false, 00:05:50.633 "error_recovery_level": 0, 00:05:50.633 "nop_timeout": 60, 00:05:50.633 "nop_in_interval": 30, 00:05:50.633 "disable_chap": 
false, 00:05:50.633 "require_chap": false, 00:05:50.633 "mutual_chap": false, 00:05:50.633 "chap_group": 0, 00:05:50.633 "max_large_datain_per_connection": 64, 00:05:50.633 "max_r2t_per_connection": 4, 00:05:50.633 "pdu_pool_size": 36864, 00:05:50.633 "immediate_data_pool_size": 16384, 00:05:50.633 "data_out_pool_size": 2048 00:05:50.633 } 00:05:50.633 } 00:05:50.633 ] 00:05:50.633 } 00:05:50.633 ] 00:05:50.633 } 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1616071 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1616071 ']' 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1616071 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1616071 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1616071' 00:05:50.633 killing process with pid 1616071 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1616071 00:05:50.633 14:48:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1616071 00:05:50.892 14:48:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1616176 00:05:50.892 14:48:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:50.892 14:48:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1616176 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1616176 ']' 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1616176 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1616176 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1616176' 00:05:56.173 killing process with pid 1616176 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1616176 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1616176 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
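The skip_rpc_with_json sequence captured above is a configuration round-trip: create the TCP transport over RPC, snapshot the live configuration with save_config (that snapshot is the large "subsystems" document dumped above), then restart the target with --json and confirm the transport comes back without any RPC traffic. A hedged sketch of the same round-trip; the RPC names match the trace, while the log and config paths here are illustrative:

  # configure at runtime, then snapshot
  scripts/rpc.py nvmf_create_transport -t tcp          # target logs "*** TCP Transport Init ***"
  scripts/rpc.py save_config > /tmp/config.json
  kill "$tgt_pid"; wait "$tgt_pid" || true

  # replay the snapshot with no RPC server and check the same init happened
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/tgt.log 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' /tmp/tgt.log            # transport restored purely from JSON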
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:56.173 00:05:56.173 real 0m6.511s 00:05:56.173 user 0m6.373s 00:05:56.173 sys 0m0.518s 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.173 14:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.173 ************************************ 00:05:56.173 END TEST skip_rpc_with_json 00:05:56.173 ************************************ 00:05:56.173 14:48:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.173 14:48:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:56.173 14:48:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.173 14:48:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.173 14:48:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.173 ************************************ 00:05:56.173 START TEST skip_rpc_with_delay 00:05:56.173 ************************************ 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:56.173 [2024-07-15 14:48:12.120322] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:56.173 [2024-07-15 14:48:12.120416] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.173 00:05:56.173 real 0m0.080s 00:05:56.173 user 0m0.060s 00:05:56.173 sys 0m0.019s 00:05:56.173 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.174 14:48:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:56.174 ************************************ 00:05:56.174 END TEST skip_rpc_with_delay 00:05:56.174 ************************************ 00:05:56.174 14:48:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.174 14:48:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:56.174 14:48:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:56.174 14:48:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:56.174 14:48:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.174 14:48:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.174 14:48:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.174 ************************************ 00:05:56.174 START TEST exit_on_failed_rpc_init 00:05:56.174 ************************************ 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1617909 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1617909 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1617909 ']' 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.174 14:48:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.435 [2024-07-15 14:48:12.273961] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
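The skip_rpc_with_delay failure above is the intended outcome: --wait-for-rpc defers subsystem initialization until an RPC says go, so combining it with --no-rpc-server can never work. For contrast, a sketch of the flag's valid use; framework_start_init and framework_wait_init are the standard SPDK RPCs for this, and the sock_set_default_impl call (one of the init-time settings visible in the config dump above) stands in for whatever early configuration is actually wanted:

  build/bin/spdk_tgt --wait-for-rpc -m 0x1 &
  sleep 2                                          # or use the suite's waitforlisten helper
  scripts/rpc.py rpc_get_methods --current         # only pre-init RPCs are callable at this point
  scripts/rpc.py sock_set_default_impl -i posix    # example of an init-time-only setting
  scripts/rpc.py framework_start_init              # subsystems initialize now
  scripts/rpc.py framework_wait_init               # returns once initialization completes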
00:05:56.435 [2024-07-15 14:48:12.274022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617909 ] 00:05:56.435 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.436 [2024-07-15 14:48:12.344377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.436 [2024-07-15 14:48:12.418776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:57.006 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.266 [2024-07-15 14:48:13.106716] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:57.266 [2024-07-15 14:48:13.106769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617999 ] 00:05:57.266 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.266 [2024-07-15 14:48:13.188205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.266 [2024-07-15 14:48:13.252245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.266 [2024-07-15 14:48:13.252307] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:05:57.266 [2024-07-15 14:48:13.252316] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.266 [2024-07-15 14:48:13.252324] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1617909 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1617909 ']' 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1617909 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.266 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1617909 00:05:57.525 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.525 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.525 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1617909' 00:05:57.525 killing process with pid 1617909 00:05:57.525 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1617909 00:05:57.525 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1617909 00:05:57.525 00:05:57.525 real 0m1.358s 00:05:57.525 user 0m1.569s 00:05:57.525 sys 0m0.400s 00:05:57.525 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.525 14:48:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.525 ************************************ 00:05:57.525 END TEST exit_on_failed_rpc_init 00:05:57.525 ************************************ 00:05:57.785 14:48:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.785 14:48:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:57.785 00:05:57.785 real 0m13.647s 00:05:57.785 user 0m13.223s 00:05:57.785 sys 0m1.463s 00:05:57.785 14:48:13 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.785 14:48:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.785 ************************************ 00:05:57.785 END TEST skip_rpc 00:05:57.785 ************************************ 00:05:57.785 14:48:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.785 14:48:13 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:57.785 14:48:13 
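The exit_on_failed_rpc_init case above deliberately starts a second target against the same default socket and expects the "RPC Unix domain socket path /var/tmp/spdk.sock in use" abort. When two instances are genuinely wanted, each gets its own RPC socket via -r and the client selects one with -s; a brief sketch (the second socket path is arbitrary, and as the EAL parameter lines above show, each instance already uses its own per-pid hugepage file prefix):

  build/bin/spdk_tgt -m 0x1 &
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  sleep 5
  scripts/rpc.py spdk_get_version                          # default /var/tmp/spdk.sock -> first instance
  scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version   # second instance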
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.785 14:48:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.785 14:48:13 -- common/autotest_common.sh@10 -- # set +x 00:05:57.785 ************************************ 00:05:57.785 START TEST rpc_client 00:05:57.785 ************************************ 00:05:57.785 14:48:13 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:57.785 * Looking for test storage... 00:05:57.785 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:57.785 14:48:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:57.785 OK 00:05:57.785 14:48:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:57.785 00:05:57.785 real 0m0.131s 00:05:57.785 user 0m0.055s 00:05:57.785 sys 0m0.084s 00:05:57.785 14:48:13 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.785 14:48:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:57.785 ************************************ 00:05:57.785 END TEST rpc_client 00:05:57.785 ************************************ 00:05:58.046 14:48:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.046 14:48:13 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:58.046 14:48:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.046 14:48:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.046 14:48:13 -- common/autotest_common.sh@10 -- # set +x 00:05:58.046 ************************************ 00:05:58.046 START TEST json_config 00:05:58.046 ************************************ 00:05:58.046 14:48:13 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:58.046 14:48:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.046 14:48:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.047 14:48:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.047 14:48:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:58.047 14:48:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:58.047 14:48:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.047 14:48:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.047 14:48:13 json_config -- nvmf/common.sh@21 
-- # NET_TYPE=phy-fallback 00:05:58.047 14:48:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.047 14:48:13 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:58.047 14:48:13 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.047 14:48:13 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.047 14:48:13 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.047 14:48:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.047 14:48:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.047 14:48:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.047 14:48:14 json_config -- paths/export.sh@5 -- # export PATH 00:05:58.047 14:48:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.047 14:48:14 json_config -- nvmf/common.sh@47 -- # : 0 00:05:58.047 14:48:14 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.047 14:48:14 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.047 14:48:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.047 14:48:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.047 14:48:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.047 14:48:14 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.047 14:48:14 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.047 14:48:14 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:58.047 14:48:14 json_config -- 
json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:58.047 INFO: JSON configuration test init 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 14:48:14 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:58.047 14:48:14 json_config -- json_config/common.sh@9 -- # local app=target 00:05:58.047 14:48:14 json_config -- json_config/common.sh@10 -- # shift 00:05:58.047 14:48:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:58.047 14:48:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:58.047 14:48:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:58.047 14:48:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.047 14:48:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:58.047 14:48:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1618399 00:05:58.047 14:48:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:58.047 Waiting for target to run... 
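The target above was launched with --wait-for-rpc, so it stops short of full initialization until the harness drives it over the RPC socket. A minimal sketch of that start-and-wait pattern, using the same binary, socket and flags as the logged command (the polling loop and retry count are illustrative, not lifted from the harness):

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
# poll until the target answers on its UNIX socket
for i in $(seq 1 100); do
    $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done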
00:05:58.047 14:48:14 json_config -- json_config/common.sh@25 -- # waitforlisten 1618399 /var/tmp/spdk_tgt.sock 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@829 -- # '[' -z 1618399 ']' 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.047 14:48:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:58.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.047 14:48:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 [2024-07-15 14:48:14.082696] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:58.047 [2024-07-15 14:48:14.082776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618399 ] 00:05:58.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.569 [2024-07-15 14:48:14.400259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.569 [2024-07-15 14:48:14.457256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.829 14:48:14 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.829 14:48:14 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:58.829 14:48:14 json_config -- json_config/common.sh@26 -- # echo '' 00:05:58.829 00:05:58.829 14:48:14 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:58.829 14:48:14 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:58.829 14:48:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.829 14:48:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.829 14:48:14 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:58.829 14:48:14 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:58.829 14:48:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.829 14:48:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.089 14:48:14 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:59.089 14:48:14 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:59.089 14:48:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:59.660 14:48:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.660 14:48:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.660 14:48:15 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:59.660 14:48:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:59.660 14:48:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.660 14:48:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:59.660 14:48:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.660 14:48:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:59.660 14:48:15 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:59.660 14:48:15 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:59.660 14:48:15 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.660 14:48:15 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:59.660 14:48:15 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:59.660 14:48:15 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:59.660 14:48:15 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.660 14:48:15 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:59.660 14:48:15 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.660 14:48:15 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:59.660 14:48:15 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:59.660 14:48:15 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:59.660 14:48:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@289 -- 
# local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@296 -- # e810=() 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@297 -- # x722=() 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@298 -- # mlx=() 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:06:07.794 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:07.794 14:48:23 json_config -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:06:07.794 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:06:07.794 Found net devices under 0000:98:00.0: mlx_0_0 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:06:07.794 Found net devices under 0000:98:00.1: mlx_0_1 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@58 -- # uname 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:07.794 14:48:23 json_config -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:07.794 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:07.794 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:06:07.794 altname enp152s0f0np0 00:06:07.794 altname ens817f0np0 00:06:07.794 inet 192.168.100.8/24 scope global mlx_0_0 00:06:07.794 valid_lft forever preferred_lft forever 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:07.794 14:48:23 
json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:07.794 14:48:23 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:07.794 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:07.794 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:06:07.794 altname enp152s0f1np1 00:06:07.794 altname ens817f1np1 00:06:07.794 inet 192.168.100.9/24 scope global mlx_0_1 00:06:07.795 valid_lft forever preferred_lft forever 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@422 -- # return 0 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:06:07.795 192.168.100.9' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:07.795 192.168.100.9' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@457 -- # head -n 1 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:07.795 192.168.100.9' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@458 -- # head -n 1 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:07.795 14:48:23 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:07.795 14:48:23 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:06:07.795 14:48:23 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:07.795 14:48:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:07.795 MallocForNvmf0 00:06:07.795 14:48:23 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.795 14:48:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:07.795 MallocForNvmf1 00:06:07.795 14:48:23 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:07.795 14:48:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:08.055 [2024-07-15 14:48:23.880735] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:08.055 [2024-07-15 14:48:23.915088] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbe8200/0xc15180) succeed. 00:06:08.055 [2024-07-15 14:48:23.929966] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbea3f0/0xc75140) succeed. 
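The two malloc bdevs and the RDMA transport that the test just created can be replayed by hand with the same RPCs; a sketch against the socket used in the logged run:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock
# 8 MB and 4 MB malloc bdevs with 512-byte and 1024-byte blocks
$rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
# RDMA transport; -c 0 asks for no in-capsule data, which the target
# bumps to its 256-byte minimum (see the warning that follows)
$rpc -s $sock nvmf_create_transport -t rdma -u 8192 -c 0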
00:06:08.055 14:48:23 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.055 14:48:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.314 14:48:24 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.315 14:48:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.315 14:48:24 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.315 14:48:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:08.575 14:48:24 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:08.575 14:48:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:08.575 [2024-07-15 14:48:24.620941] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:08.835 14:48:24 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:08.835 14:48:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.835 14:48:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.835 14:48:24 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:08.835 14:48:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.835 14:48:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.835 14:48:24 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:08.835 14:48:24 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:08.835 14:48:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:08.835 MallocBdevForConfigChangeCheck 00:06:08.835 14:48:24 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:08.835 14:48:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.835 14:48:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.095 14:48:24 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:09.095 14:48:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.355 14:48:25 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:09.355 INFO: shutting down applications... 
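The subsystem wiring and the configuration snapshot applied above follow the same pattern; a sketch of the equivalent manual sequence, reusing the values shown in the trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock
nqn=nqn.2016-06.io.spdk:cnode1
$rpc -s $sock nvmf_create_subsystem $nqn -a -s SPDK00000000000001
$rpc -s $sock nvmf_subsystem_add_ns $nqn MallocForNvmf0
$rpc -s $sock nvmf_subsystem_add_ns $nqn MallocForNvmf1
$rpc -s $sock nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
# marker bdev whose later removal proves config changes are detected
$rpc -s $sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
# snapshot the running configuration for the relaunch/compare steps
$rpc -s $sock save_config > spdk_tgt_config.json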
00:06:09.355 14:48:25 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:09.355 14:48:25 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:09.355 14:48:25 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:09.355 14:48:25 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:09.616 Calling clear_iscsi_subsystem 00:06:09.616 Calling clear_nvmf_subsystem 00:06:09.616 Calling clear_nbd_subsystem 00:06:09.616 Calling clear_ublk_subsystem 00:06:09.616 Calling clear_vhost_blk_subsystem 00:06:09.616 Calling clear_vhost_scsi_subsystem 00:06:09.616 Calling clear_bdev_subsystem 00:06:09.616 14:48:25 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:09.616 14:48:25 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:09.616 14:48:25 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:09.616 14:48:25 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.616 14:48:25 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:09.616 14:48:25 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:09.876 14:48:25 json_config -- json_config/json_config.sh@345 -- # break 00:06:09.876 14:48:25 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:09.876 14:48:25 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:09.876 14:48:25 json_config -- json_config/common.sh@31 -- # local app=target 00:06:09.876 14:48:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:09.876 14:48:25 json_config -- json_config/common.sh@35 -- # [[ -n 1618399 ]] 00:06:09.876 14:48:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1618399 00:06:09.876 14:48:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:09.876 14:48:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.876 14:48:25 json_config -- json_config/common.sh@41 -- # kill -0 1618399 00:06:09.876 14:48:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.445 14:48:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.445 14:48:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.445 14:48:26 json_config -- json_config/common.sh@41 -- # kill -0 1618399 00:06:10.445 14:48:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.445 14:48:26 json_config -- json_config/common.sh@43 -- # break 00:06:10.445 14:48:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.445 14:48:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.445 SPDK target shutdown done 00:06:10.445 14:48:26 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:10.445 INFO: relaunching applications... 
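The shutdown that just ran is the usual SIGINT-and-poll pattern; a sketch, with tgt_pid standing in for the app_pid the harness tracks (1618399 in this run):

kill -SIGINT $tgt_pid
# give the target up to ~15 seconds to exit cleanly
for (( i = 0; i < 30; i++ )); do
    kill -0 $tgt_pid 2>/dev/null || break
    sleep 0.5
done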
00:06:10.445 14:48:26 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.445 14:48:26 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.445 14:48:26 json_config -- json_config/common.sh@10 -- # shift 00:06:10.445 14:48:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.445 14:48:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.445 14:48:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.445 14:48:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.445 14:48:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.445 14:48:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1623521 00:06:10.445 14:48:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.445 Waiting for target to run... 00:06:10.445 14:48:26 json_config -- json_config/common.sh@25 -- # waitforlisten 1623521 /var/tmp/spdk_tgt.sock 00:06:10.445 14:48:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.445 14:48:26 json_config -- common/autotest_common.sh@829 -- # '[' -z 1623521 ']' 00:06:10.445 14:48:26 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.445 14:48:26 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.445 14:48:26 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.445 14:48:26 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.445 14:48:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.445 [2024-07-15 14:48:26.494545] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:10.445 [2024-07-15 14:48:26.494600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623521 ] 00:06:10.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.966 [2024-07-15 14:48:26.773273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.966 [2024-07-15 14:48:26.825098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.538 [2024-07-15 14:48:27.361593] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aa4320/0x190e800) succeed. 00:06:11.538 [2024-07-15 14:48:27.376030] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aa8d80/0x198e880) succeed. 
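For the relaunch, the only difference from the first start is that the saved JSON is loaded at startup instead of waiting for RPC-driven init; the command line, as logged:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
    -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &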
00:06:11.538 [2024-07-15 14:48:27.432119] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:11.538 14:48:27 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.538 14:48:27 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:11.538 14:48:27 json_config -- json_config/common.sh@26 -- # echo '' 00:06:11.538 00:06:11.538 14:48:27 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:11.538 14:48:27 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:11.538 INFO: Checking if target configuration is the same... 00:06:11.538 14:48:27 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.538 14:48:27 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:11.538 14:48:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.538 + '[' 2 -ne 2 ']' 00:06:11.538 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:11.538 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:11.538 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:11.538 +++ basename /dev/fd/62 00:06:11.538 ++ mktemp /tmp/62.XXX 00:06:11.538 + tmp_file_1=/tmp/62.MXm 00:06:11.538 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.538 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.538 + tmp_file_2=/tmp/spdk_tgt_config.json.2oP 00:06:11.538 + ret=0 00:06:11.538 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.798 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.798 + diff -u /tmp/62.MXm /tmp/spdk_tgt_config.json.2oP 00:06:11.798 + echo 'INFO: JSON config files are the same' 00:06:11.798 INFO: JSON config files are the same 00:06:11.798 + rm /tmp/62.MXm /tmp/spdk_tgt_config.json.2oP 00:06:11.798 + exit 0 00:06:11.798 14:48:27 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:11.798 14:48:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:11.798 INFO: changing configuration and checking if this can be detected... 
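The "same configuration" check above boils down to normalizing both JSON documents and diffing them; a sketch of what json_diff.sh is doing (temp-file names are illustrative; config_filter.py reads stdin, as in the trace):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
sock=/var/tmp/spdk_tgt.sock
# sort both sides so key ordering cannot cause false diffs
$rpc -s $sock save_config | $filter -method sort > /tmp/live.json
$filter -method sort < /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json > /tmp/saved.json
# identical configs leave the diff empty and the step exits 0
diff -u /tmp/saved.json /tmp/live.json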
00:06:11.798 14:48:27 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.798 14:48:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.059 14:48:27 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.059 14:48:27 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:12.059 14:48:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.059 + '[' 2 -ne 2 ']' 00:06:12.059 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:12.059 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:12.059 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:12.059 +++ basename /dev/fd/62 00:06:12.059 ++ mktemp /tmp/62.XXX 00:06:12.059 + tmp_file_1=/tmp/62.s90 00:06:12.059 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.059 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.059 + tmp_file_2=/tmp/spdk_tgt_config.json.7O2 00:06:12.059 + ret=0 00:06:12.059 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.320 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.320 + diff -u /tmp/62.s90 /tmp/spdk_tgt_config.json.7O2 00:06:12.320 + ret=1 00:06:12.320 + echo '=== Start of file: /tmp/62.s90 ===' 00:06:12.320 + cat /tmp/62.s90 00:06:12.320 + echo '=== End of file: /tmp/62.s90 ===' 00:06:12.320 + echo '' 00:06:12.320 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7O2 ===' 00:06:12.320 + cat /tmp/spdk_tgt_config.json.7O2 00:06:12.320 + echo '=== End of file: /tmp/spdk_tgt_config.json.7O2 ===' 00:06:12.320 + echo '' 00:06:12.320 + rm /tmp/62.s90 /tmp/spdk_tgt_config.json.7O2 00:06:12.320 + exit 1 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:12.320 INFO: configuration change detected. 
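The change-detection pass above removes the marker bdev and reruns the same comparison, this time expecting a non-empty diff; a sketch under the same assumptions as the previous snippet:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
sock=/var/tmp/spdk_tgt.sock
$rpc -s $sock bdev_malloc_delete MallocBdevForConfigChangeCheck
$rpc -s $sock save_config | $filter -method sort > /tmp/live.json
if diff -u /tmp/saved.json /tmp/live.json > /dev/null; then
    echo 'ERROR: configuration change was not detected'
else
    echo 'configuration change detected'
fi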
00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:12.320 14:48:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.320 14:48:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@317 -- # [[ -n 1623521 ]] 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:12.320 14:48:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.320 14:48:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:12.320 14:48:28 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:12.320 14:48:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.320 14:48:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.581 14:48:28 json_config -- json_config/json_config.sh@323 -- # killprocess 1623521 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@948 -- # '[' -z 1623521 ']' 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@952 -- # kill -0 1623521 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@953 -- # uname 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1623521 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1623521' 00:06:12.581 killing process with pid 1623521 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@967 -- # kill 1623521 00:06:12.581 14:48:28 json_config -- common/autotest_common.sh@972 -- # wait 1623521 00:06:12.841 14:48:28 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.841 14:48:28 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:12.841 14:48:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.841 14:48:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.841 14:48:28 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:12.841 14:48:28 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:12.841 INFO: Success 00:06:12.841 14:48:28 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:12.841 14:48:28 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:12.841 14:48:28 json_config -- nvmf/common.sh@117 -- # sync 00:06:12.841 14:48:28 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:12.841 14:48:28 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:12.841 14:48:28 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:12.841 14:48:28 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:12.841 14:48:28 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:12.841 00:06:12.841 real 0m14.929s 00:06:12.841 user 0m18.635s 00:06:12.841 sys 0m7.395s 00:06:12.841 14:48:28 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.841 14:48:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.841 ************************************ 00:06:12.841 END TEST json_config 00:06:12.841 ************************************ 00:06:12.841 14:48:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.841 14:48:28 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:12.841 14:48:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.841 14:48:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.841 14:48:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.102 ************************************ 00:06:13.102 START TEST json_config_extra_key 00:06:13.102 ************************************ 00:06:13.102 14:48:28 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:13.102 14:48:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.102 14:48:28 
json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.102 14:48:28 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:13.102 14:48:29 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.102 14:48:29 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.102 14:48:29 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.102 14:48:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.102 14:48:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.102 14:48:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.102 14:48:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:13.102 14:48:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.102 14:48:29 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:13.102 14:48:29 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.102 14:48:29 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.102 14:48:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.102 14:48:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.102 14:48:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.102 14:48:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.102 14:48:29 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.102 14:48:29 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.102 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:13.102 14:48:29 json_config_extra_key -- 
json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:13.102 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:13.103 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:13.103 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:13.103 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:13.103 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:13.103 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:13.103 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:13.103 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.103 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:13.103 INFO: launching applications... 00:06:13.103 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1624265 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.103 Waiting for target to run... 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1624265 /var/tmp/spdk_tgt.sock 00:06:13.103 14:48:29 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1624265 ']' 00:06:13.103 14:48:29 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.103 14:48:29 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.103 14:48:29 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:13.103 14:48:29 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
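[Annotation] The trace above captures the launch pattern common.sh drives for these json_config tests: per-app settings (RPC socket, core mask, config file) live in bash associative arrays, spdk_tgt is started against a private UNIX-domain RPC socket, and the test blocks until that socket comes up. A minimal sketch of the same flow, assuming placeholder binary/config paths; the socket probe below is a simplified stand-in for the real waitforlisten helper, and the 30 x 0.5s retry budget mirrors the loop bounds visible in this log rather than any authoritative default.

    #!/usr/bin/env bash
    # Sketch only: SPDK_TGT and CONFIG are assumed paths, not the test's real ones.
    SPDK_TGT=./build/bin/spdk_tgt
    CONFIG=./test/json_config/extra_key.json

    declare -A app_pid app_socket app_params configs_path
    app_socket['target']='/var/tmp/spdk_tgt.sock'
    app_params['target']='-m 0x1 -s 1024'
    configs_path['target']=$CONFIG

    start_target() {
        local app=target
        # Word splitting of app_params is intentional: it holds CLI flags.
        "$SPDK_TGT" ${app_params[$app]} -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!
        # Poll until the UNIX-domain RPC socket appears (simplified waitforlisten).
        local i
        for ((i = 0; i < 30; i++)); do
            [[ -S ${app_socket[$app]} ]] && return 0
            sleep 0.5
        done
        echo "spdk_tgt never opened ${app_socket[$app]}" >&2
        return 1
    }

The real helper goes further and retries an actual RPC over the socket instead of merely testing that it exists, which is why the trace shows rpc_addr=/var/tmp/spdk_tgt.sock threaded through waitforlisten with max_retries=100.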
00:06:13.103 14:48:29 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.103 14:48:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.103 [2024-07-15 14:48:29.071790] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:13.103 [2024-07-15 14:48:29.071867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624265 ] 00:06:13.103 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.362 [2024-07-15 14:48:29.337455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.362 [2024-07-15 14:48:29.389068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.932 14:48:29 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.932 14:48:29 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:13.932 14:48:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:13.932 00:06:13.932 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:13.932 INFO: shutting down applications... 00:06:13.932 14:48:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:13.932 14:48:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:13.932 14:48:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:13.932 14:48:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1624265 ]] 00:06:13.932 14:48:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1624265 00:06:13.932 14:48:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:13.932 14:48:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.932 14:48:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1624265 00:06:13.932 14:48:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.502 14:48:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:14.502 14:48:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.502 14:48:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1624265 00:06:14.502 14:48:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:14.502 14:48:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:14.502 14:48:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:14.502 14:48:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:14.502 SPDK target shutdown done 00:06:14.502 14:48:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:14.502 Success 00:06:14.502 00:06:14.502 real 0m1.429s 00:06:14.502 user 0m1.079s 00:06:14.502 sys 0m0.364s 00:06:14.502 14:48:30 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.502 14:48:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:14.502 ************************************ 00:06:14.502 END TEST json_config_extra_key 00:06:14.502 ************************************ 00:06:14.502 14:48:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.502 14:48:30 -- 
spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:14.502 14:48:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.502 14:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.502 14:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:14.502 ************************************ 00:06:14.502 START TEST alias_rpc 00:06:14.502 ************************************ 00:06:14.502 14:48:30 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:14.502 * Looking for test storage... 00:06:14.502 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:14.502 14:48:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:14.502 14:48:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1624558 00:06:14.503 14:48:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1624558 00:06:14.503 14:48:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.503 14:48:30 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1624558 ']' 00:06:14.503 14:48:30 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.503 14:48:30 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.503 14:48:30 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.503 14:48:30 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.503 14:48:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.762 [2024-07-15 14:48:30.574152] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:14.762 [2024-07-15 14:48:30.574218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624558 ] 00:06:14.762 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.762 [2024-07-15 14:48:30.645660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.762 [2024-07-15 14:48:30.720722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.332 14:48:31 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.332 14:48:31 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.332 14:48:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:15.592 14:48:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1624558 00:06:15.592 14:48:31 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1624558 ']' 00:06:15.592 14:48:31 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1624558 00:06:15.592 14:48:31 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:15.592 14:48:31 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.592 14:48:31 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1624558 00:06:15.592 14:48:31 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.592 14:48:31 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.592 14:48:31 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1624558' 00:06:15.592 killing process with pid 1624558 00:06:15.592 14:48:31 alias_rpc -- common/autotest_common.sh@967 -- # kill 1624558 00:06:15.593 14:48:31 alias_rpc -- common/autotest_common.sh@972 -- # wait 1624558 00:06:15.853 00:06:15.853 real 0m1.365s 00:06:15.853 user 0m1.503s 00:06:15.853 sys 0m0.352s 00:06:15.853 14:48:31 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.853 14:48:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.853 ************************************ 00:06:15.853 END TEST alias_rpc 00:06:15.853 ************************************ 00:06:15.853 14:48:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.853 14:48:31 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:15.853 14:48:31 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:15.853 14:48:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.853 14:48:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.853 14:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.853 ************************************ 00:06:15.853 START TEST spdkcli_tcp 00:06:15.853 ************************************ 00:06:15.853 14:48:31 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:16.114 * Looking for test storage... 
00:06:16.114 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:16.114 14:48:31 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.114 14:48:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1624827 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1624827 00:06:16.114 14:48:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:16.114 14:48:31 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1624827 ']' 00:06:16.114 14:48:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.114 14:48:31 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.114 14:48:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.114 14:48:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.114 14:48:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.114 [2024-07-15 14:48:32.022259] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:16.114 [2024-07-15 14:48:32.022330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624827 ] 00:06:16.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.114 [2024-07-15 14:48:32.093805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.114 [2024-07-15 14:48:32.168823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.114 [2024-07-15 14:48:32.168827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.055 14:48:32 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.055 14:48:32 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:17.055 14:48:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.055 14:48:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1625085 00:06:17.055 14:48:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:17.055 [ 00:06:17.055 "bdev_malloc_delete", 00:06:17.055 "bdev_malloc_create", 00:06:17.055 "bdev_null_resize", 00:06:17.055 "bdev_null_delete", 00:06:17.055 "bdev_null_create", 00:06:17.055 "bdev_nvme_cuse_unregister", 00:06:17.055 "bdev_nvme_cuse_register", 00:06:17.055 "bdev_opal_new_user", 00:06:17.055 "bdev_opal_set_lock_state", 00:06:17.055 "bdev_opal_delete", 00:06:17.055 "bdev_opal_get_info", 00:06:17.055 "bdev_opal_create", 00:06:17.055 "bdev_nvme_opal_revert", 00:06:17.055 "bdev_nvme_opal_init", 00:06:17.055 "bdev_nvme_send_cmd", 00:06:17.055 "bdev_nvme_get_path_iostat", 00:06:17.055 "bdev_nvme_get_mdns_discovery_info", 00:06:17.055 "bdev_nvme_stop_mdns_discovery", 00:06:17.055 "bdev_nvme_start_mdns_discovery", 00:06:17.055 "bdev_nvme_set_multipath_policy", 00:06:17.055 "bdev_nvme_set_preferred_path", 00:06:17.055 "bdev_nvme_get_io_paths", 00:06:17.055 "bdev_nvme_remove_error_injection", 00:06:17.055 "bdev_nvme_add_error_injection", 00:06:17.055 "bdev_nvme_get_discovery_info", 00:06:17.055 "bdev_nvme_stop_discovery", 00:06:17.055 "bdev_nvme_start_discovery", 00:06:17.055 "bdev_nvme_get_controller_health_info", 00:06:17.055 "bdev_nvme_disable_controller", 00:06:17.055 "bdev_nvme_enable_controller", 00:06:17.055 "bdev_nvme_reset_controller", 00:06:17.055 "bdev_nvme_get_transport_statistics", 00:06:17.055 "bdev_nvme_apply_firmware", 00:06:17.055 "bdev_nvme_detach_controller", 00:06:17.055 "bdev_nvme_get_controllers", 00:06:17.055 "bdev_nvme_attach_controller", 00:06:17.055 "bdev_nvme_set_hotplug", 00:06:17.055 "bdev_nvme_set_options", 00:06:17.055 "bdev_passthru_delete", 00:06:17.055 "bdev_passthru_create", 00:06:17.055 "bdev_lvol_set_parent_bdev", 00:06:17.055 "bdev_lvol_set_parent", 00:06:17.055 "bdev_lvol_check_shallow_copy", 00:06:17.055 "bdev_lvol_start_shallow_copy", 00:06:17.055 "bdev_lvol_grow_lvstore", 00:06:17.055 "bdev_lvol_get_lvols", 00:06:17.055 "bdev_lvol_get_lvstores", 00:06:17.055 "bdev_lvol_delete", 00:06:17.055 "bdev_lvol_set_read_only", 00:06:17.055 "bdev_lvol_resize", 00:06:17.055 "bdev_lvol_decouple_parent", 00:06:17.055 "bdev_lvol_inflate", 00:06:17.055 "bdev_lvol_rename", 00:06:17.055 "bdev_lvol_clone_bdev", 00:06:17.055 "bdev_lvol_clone", 00:06:17.055 "bdev_lvol_snapshot", 00:06:17.055 "bdev_lvol_create", 00:06:17.055 "bdev_lvol_delete_lvstore", 00:06:17.055 
"bdev_lvol_rename_lvstore", 00:06:17.055 "bdev_lvol_create_lvstore", 00:06:17.055 "bdev_raid_set_options", 00:06:17.055 "bdev_raid_remove_base_bdev", 00:06:17.055 "bdev_raid_add_base_bdev", 00:06:17.055 "bdev_raid_delete", 00:06:17.055 "bdev_raid_create", 00:06:17.055 "bdev_raid_get_bdevs", 00:06:17.055 "bdev_error_inject_error", 00:06:17.055 "bdev_error_delete", 00:06:17.055 "bdev_error_create", 00:06:17.055 "bdev_split_delete", 00:06:17.055 "bdev_split_create", 00:06:17.055 "bdev_delay_delete", 00:06:17.055 "bdev_delay_create", 00:06:17.055 "bdev_delay_update_latency", 00:06:17.055 "bdev_zone_block_delete", 00:06:17.055 "bdev_zone_block_create", 00:06:17.055 "blobfs_create", 00:06:17.055 "blobfs_detect", 00:06:17.055 "blobfs_set_cache_size", 00:06:17.055 "bdev_aio_delete", 00:06:17.055 "bdev_aio_rescan", 00:06:17.055 "bdev_aio_create", 00:06:17.055 "bdev_ftl_set_property", 00:06:17.055 "bdev_ftl_get_properties", 00:06:17.055 "bdev_ftl_get_stats", 00:06:17.055 "bdev_ftl_unmap", 00:06:17.055 "bdev_ftl_unload", 00:06:17.055 "bdev_ftl_delete", 00:06:17.055 "bdev_ftl_load", 00:06:17.055 "bdev_ftl_create", 00:06:17.055 "bdev_virtio_attach_controller", 00:06:17.055 "bdev_virtio_scsi_get_devices", 00:06:17.055 "bdev_virtio_detach_controller", 00:06:17.055 "bdev_virtio_blk_set_hotplug", 00:06:17.055 "bdev_iscsi_delete", 00:06:17.055 "bdev_iscsi_create", 00:06:17.055 "bdev_iscsi_set_options", 00:06:17.055 "accel_error_inject_error", 00:06:17.055 "ioat_scan_accel_module", 00:06:17.055 "dsa_scan_accel_module", 00:06:17.055 "iaa_scan_accel_module", 00:06:17.055 "keyring_file_remove_key", 00:06:17.055 "keyring_file_add_key", 00:06:17.055 "keyring_linux_set_options", 00:06:17.055 "iscsi_get_histogram", 00:06:17.055 "iscsi_enable_histogram", 00:06:17.055 "iscsi_set_options", 00:06:17.055 "iscsi_get_auth_groups", 00:06:17.055 "iscsi_auth_group_remove_secret", 00:06:17.055 "iscsi_auth_group_add_secret", 00:06:17.055 "iscsi_delete_auth_group", 00:06:17.055 "iscsi_create_auth_group", 00:06:17.055 "iscsi_set_discovery_auth", 00:06:17.055 "iscsi_get_options", 00:06:17.055 "iscsi_target_node_request_logout", 00:06:17.055 "iscsi_target_node_set_redirect", 00:06:17.055 "iscsi_target_node_set_auth", 00:06:17.055 "iscsi_target_node_add_lun", 00:06:17.055 "iscsi_get_stats", 00:06:17.055 "iscsi_get_connections", 00:06:17.055 "iscsi_portal_group_set_auth", 00:06:17.055 "iscsi_start_portal_group", 00:06:17.055 "iscsi_delete_portal_group", 00:06:17.055 "iscsi_create_portal_group", 00:06:17.055 "iscsi_get_portal_groups", 00:06:17.055 "iscsi_delete_target_node", 00:06:17.055 "iscsi_target_node_remove_pg_ig_maps", 00:06:17.055 "iscsi_target_node_add_pg_ig_maps", 00:06:17.055 "iscsi_create_target_node", 00:06:17.055 "iscsi_get_target_nodes", 00:06:17.055 "iscsi_delete_initiator_group", 00:06:17.055 "iscsi_initiator_group_remove_initiators", 00:06:17.055 "iscsi_initiator_group_add_initiators", 00:06:17.055 "iscsi_create_initiator_group", 00:06:17.055 "iscsi_get_initiator_groups", 00:06:17.055 "nvmf_set_crdt", 00:06:17.055 "nvmf_set_config", 00:06:17.055 "nvmf_set_max_subsystems", 00:06:17.055 "nvmf_stop_mdns_prr", 00:06:17.055 "nvmf_publish_mdns_prr", 00:06:17.055 "nvmf_subsystem_get_listeners", 00:06:17.055 "nvmf_subsystem_get_qpairs", 00:06:17.055 "nvmf_subsystem_get_controllers", 00:06:17.055 "nvmf_get_stats", 00:06:17.055 "nvmf_get_transports", 00:06:17.055 "nvmf_create_transport", 00:06:17.055 "nvmf_get_targets", 00:06:17.055 "nvmf_delete_target", 00:06:17.056 "nvmf_create_target", 00:06:17.056 
"nvmf_subsystem_allow_any_host", 00:06:17.056 "nvmf_subsystem_remove_host", 00:06:17.056 "nvmf_subsystem_add_host", 00:06:17.056 "nvmf_ns_remove_host", 00:06:17.056 "nvmf_ns_add_host", 00:06:17.056 "nvmf_subsystem_remove_ns", 00:06:17.056 "nvmf_subsystem_add_ns", 00:06:17.056 "nvmf_subsystem_listener_set_ana_state", 00:06:17.056 "nvmf_discovery_get_referrals", 00:06:17.056 "nvmf_discovery_remove_referral", 00:06:17.056 "nvmf_discovery_add_referral", 00:06:17.056 "nvmf_subsystem_remove_listener", 00:06:17.056 "nvmf_subsystem_add_listener", 00:06:17.056 "nvmf_delete_subsystem", 00:06:17.056 "nvmf_create_subsystem", 00:06:17.056 "nvmf_get_subsystems", 00:06:17.056 "env_dpdk_get_mem_stats", 00:06:17.056 "nbd_get_disks", 00:06:17.056 "nbd_stop_disk", 00:06:17.056 "nbd_start_disk", 00:06:17.056 "ublk_recover_disk", 00:06:17.056 "ublk_get_disks", 00:06:17.056 "ublk_stop_disk", 00:06:17.056 "ublk_start_disk", 00:06:17.056 "ublk_destroy_target", 00:06:17.056 "ublk_create_target", 00:06:17.056 "virtio_blk_create_transport", 00:06:17.056 "virtio_blk_get_transports", 00:06:17.056 "vhost_controller_set_coalescing", 00:06:17.056 "vhost_get_controllers", 00:06:17.056 "vhost_delete_controller", 00:06:17.056 "vhost_create_blk_controller", 00:06:17.056 "vhost_scsi_controller_remove_target", 00:06:17.056 "vhost_scsi_controller_add_target", 00:06:17.056 "vhost_start_scsi_controller", 00:06:17.056 "vhost_create_scsi_controller", 00:06:17.056 "thread_set_cpumask", 00:06:17.056 "framework_get_governor", 00:06:17.056 "framework_get_scheduler", 00:06:17.056 "framework_set_scheduler", 00:06:17.056 "framework_get_reactors", 00:06:17.056 "thread_get_io_channels", 00:06:17.056 "thread_get_pollers", 00:06:17.056 "thread_get_stats", 00:06:17.056 "framework_monitor_context_switch", 00:06:17.056 "spdk_kill_instance", 00:06:17.056 "log_enable_timestamps", 00:06:17.056 "log_get_flags", 00:06:17.056 "log_clear_flag", 00:06:17.056 "log_set_flag", 00:06:17.056 "log_get_level", 00:06:17.056 "log_set_level", 00:06:17.056 "log_get_print_level", 00:06:17.056 "log_set_print_level", 00:06:17.056 "framework_enable_cpumask_locks", 00:06:17.056 "framework_disable_cpumask_locks", 00:06:17.056 "framework_wait_init", 00:06:17.056 "framework_start_init", 00:06:17.056 "scsi_get_devices", 00:06:17.056 "bdev_get_histogram", 00:06:17.056 "bdev_enable_histogram", 00:06:17.056 "bdev_set_qos_limit", 00:06:17.056 "bdev_set_qd_sampling_period", 00:06:17.056 "bdev_get_bdevs", 00:06:17.056 "bdev_reset_iostat", 00:06:17.056 "bdev_get_iostat", 00:06:17.056 "bdev_examine", 00:06:17.056 "bdev_wait_for_examine", 00:06:17.056 "bdev_set_options", 00:06:17.056 "notify_get_notifications", 00:06:17.056 "notify_get_types", 00:06:17.056 "accel_get_stats", 00:06:17.056 "accel_set_options", 00:06:17.056 "accel_set_driver", 00:06:17.056 "accel_crypto_key_destroy", 00:06:17.056 "accel_crypto_keys_get", 00:06:17.056 "accel_crypto_key_create", 00:06:17.056 "accel_assign_opc", 00:06:17.056 "accel_get_module_info", 00:06:17.056 "accel_get_opc_assignments", 00:06:17.056 "vmd_rescan", 00:06:17.056 "vmd_remove_device", 00:06:17.056 "vmd_enable", 00:06:17.056 "sock_get_default_impl", 00:06:17.056 "sock_set_default_impl", 00:06:17.056 "sock_impl_set_options", 00:06:17.056 "sock_impl_get_options", 00:06:17.056 "iobuf_get_stats", 00:06:17.056 "iobuf_set_options", 00:06:17.056 "framework_get_pci_devices", 00:06:17.056 "framework_get_config", 00:06:17.056 "framework_get_subsystems", 00:06:17.056 "trace_get_info", 00:06:17.056 "trace_get_tpoint_group_mask", 00:06:17.056 
"trace_disable_tpoint_group", 00:06:17.056 "trace_enable_tpoint_group", 00:06:17.056 "trace_clear_tpoint_mask", 00:06:17.056 "trace_set_tpoint_mask", 00:06:17.056 "keyring_get_keys", 00:06:17.056 "spdk_get_version", 00:06:17.056 "rpc_get_methods" 00:06:17.056 ] 00:06:17.056 14:48:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:17.056 14:48:32 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.056 14:48:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.056 14:48:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:17.056 14:48:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1624827 00:06:17.056 14:48:32 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1624827 ']' 00:06:17.056 14:48:32 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1624827 00:06:17.056 14:48:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:17.056 14:48:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.056 14:48:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1624827 00:06:17.056 14:48:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.056 14:48:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.056 14:48:33 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1624827' 00:06:17.056 killing process with pid 1624827 00:06:17.056 14:48:33 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1624827 00:06:17.056 14:48:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1624827 00:06:17.316 00:06:17.316 real 0m1.403s 00:06:17.316 user 0m2.572s 00:06:17.316 sys 0m0.415s 00:06:17.316 14:48:33 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.316 14:48:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.316 ************************************ 00:06:17.316 END TEST spdkcli_tcp 00:06:17.316 ************************************ 00:06:17.316 14:48:33 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.316 14:48:33 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.316 14:48:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.316 14:48:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.316 14:48:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.316 ************************************ 00:06:17.316 START TEST dpdk_mem_utility 00:06:17.316 ************************************ 00:06:17.316 14:48:33 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.576 * Looking for test storage... 
00:06:17.576 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:17.576 14:48:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:17.576 14:48:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1625159 00:06:17.576 14:48:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1625159 00:06:17.576 14:48:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:17.576 14:48:33 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1625159 ']' 00:06:17.576 14:48:33 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.576 14:48:33 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.576 14:48:33 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.576 14:48:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.576 14:48:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 [2024-07-15 14:48:33.485864] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:17.576 [2024-07-15 14:48:33.485939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625159 ] 00:06:17.576 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.576 [2024-07-15 14:48:33.557558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.576 [2024-07-15 14:48:33.632728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.518 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.518 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:18.518 14:48:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:18.518 14:48:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:18.518 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.518 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 { 00:06:18.518 "filename": "/tmp/spdk_mem_dump.txt" 00:06:18.518 } 00:06:18.518 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.518 14:48:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:18.518 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:18.518 1 heaps totaling size 814.000000 MiB 00:06:18.518 size: 814.000000 MiB heap id: 0 00:06:18.518 end heaps---------- 00:06:18.518 8 mempools totaling size 598.116089 MiB 00:06:18.518 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:18.518 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:18.518 size: 84.521057 MiB name: bdev_io_1625159 00:06:18.518 size: 51.011292 MiB name: evtpool_1625159 00:06:18.518 size: 50.003479 MiB 
name: msgpool_1625159 00:06:18.518 size: 21.763794 MiB name: PDU_Pool 00:06:18.518 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:18.518 size: 0.026123 MiB name: Session_Pool 00:06:18.518 end mempools------- 00:06:18.518 6 memzones totaling size 4.142822 MiB 00:06:18.518 size: 1.000366 MiB name: RG_ring_0_1625159 00:06:18.518 size: 1.000366 MiB name: RG_ring_1_1625159 00:06:18.518 size: 1.000366 MiB name: RG_ring_4_1625159 00:06:18.518 size: 1.000366 MiB name: RG_ring_5_1625159 00:06:18.518 size: 0.125366 MiB name: RG_ring_2_1625159 00:06:18.518 size: 0.015991 MiB name: RG_ring_3_1625159 00:06:18.518 end memzones------- 00:06:18.518 14:48:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:18.518 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:18.518 list of free elements. size: 12.519348 MiB 00:06:18.518 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:18.518 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:18.518 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:18.518 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:18.518 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:18.518 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:18.518 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:18.518 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:18.518 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:18.518 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:18.518 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:18.518 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:18.518 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:18.518 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:18.518 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:18.518 list of standard malloc elements. 
size: 199.218079 MiB 00:06:18.518 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:18.518 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:18.518 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:18.518 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:18.518 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:18.518 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:18.518 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:18.518 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:18.518 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:18.518 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:18.518 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:18.518 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:18.518 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:18.518 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:18.518 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:18.518 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:18.518 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:18.518 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:18.518 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:18.518 list of memzone associated elements. 
size: 602.262573 MiB 00:06:18.518 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:18.518 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:18.518 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:18.518 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:18.518 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:18.518 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1625159_0 00:06:18.518 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:18.518 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1625159_0 00:06:18.518 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:18.518 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1625159_0 00:06:18.518 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:18.518 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:18.518 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:18.519 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:18.519 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:18.519 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1625159 00:06:18.519 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:18.519 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1625159 00:06:18.519 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:18.519 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1625159 00:06:18.519 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:18.519 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:18.519 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:18.519 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:18.519 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:18.519 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:18.519 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:18.519 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:18.519 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:18.519 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1625159 00:06:18.519 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:18.519 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1625159 00:06:18.519 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:18.519 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1625159 00:06:18.519 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:18.519 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1625159 00:06:18.519 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:18.519 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1625159 00:06:18.519 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:18.519 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:18.519 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:18.519 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:18.519 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:18.519 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:18.519 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:18.519 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1625159 00:06:18.519 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:18.519 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:18.519 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:18.519 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:18.519 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:18.519 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1625159 00:06:18.519 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:18.519 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:18.519 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:18.519 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1625159 00:06:18.519 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:18.519 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1625159 00:06:18.519 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:18.519 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:18.519 14:48:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:18.519 14:48:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1625159 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1625159 ']' 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1625159 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1625159 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1625159' 00:06:18.519 killing process with pid 1625159 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1625159 00:06:18.519 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1625159 00:06:18.780 00:06:18.780 real 0m1.288s 00:06:18.780 user 0m1.341s 00:06:18.780 sys 0m0.387s 00:06:18.780 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.780 14:48:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.780 ************************************ 00:06:18.780 END TEST dpdk_mem_utility 00:06:18.780 ************************************ 00:06:18.780 14:48:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.780 14:48:34 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:18.780 14:48:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.780 14:48:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.780 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:18.780 ************************************ 00:06:18.780 START TEST event 00:06:18.780 ************************************ 00:06:18.780 14:48:34 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:18.780 * Looking for test storage... 
00:06:18.780 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:18.780 14:48:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:18.780 14:48:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:18.780 14:48:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:18.780 14:48:34 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:18.780 14:48:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.780 14:48:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.780 ************************************ 00:06:18.780 START TEST event_perf 00:06:18.780 ************************************ 00:06:18.780 14:48:34 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:19.077 Running I/O for 1 seconds...[2024-07-15 14:48:34.846611] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:19.077 [2024-07-15 14:48:34.846716] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625551 ] 00:06:19.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.077 [2024-07-15 14:48:34.921061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.077 [2024-07-15 14:48:34.997274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.077 [2024-07-15 14:48:34.997340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.077 [2024-07-15 14:48:34.997505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.077 Running I/O for 1 seconds...[2024-07-15 14:48:34.997505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.046 00:06:20.046 lcore 0: 175888 00:06:20.046 lcore 1: 175887 00:06:20.046 lcore 2: 175886 00:06:20.046 lcore 3: 175888 00:06:20.046 done. 00:06:20.046 00:06:20.046 real 0m1.225s 00:06:20.046 user 0m4.133s 00:06:20.046 sys 0m0.087s 00:06:20.046 14:48:36 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.046 14:48:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.046 ************************************ 00:06:20.046 END TEST event_perf 00:06:20.046 ************************************ 00:06:20.046 14:48:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:20.046 14:48:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:20.046 14:48:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.046 14:48:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.046 14:48:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.306 ************************************ 00:06:20.306 START TEST event_reactor 00:06:20.306 ************************************ 00:06:20.306 14:48:36 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:20.306 [2024-07-15 14:48:36.148122] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:20.306 [2024-07-15 14:48:36.148198] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1625906 ] 00:06:20.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.306 [2024-07-15 14:48:36.218910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.306 [2024-07-15 14:48:36.288300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.687 test_start 00:06:21.687 oneshot 00:06:21.687 tick 100 00:06:21.687 tick 100 00:06:21.687 tick 250 00:06:21.687 tick 100 00:06:21.687 tick 100 00:06:21.687 tick 100 00:06:21.687 tick 250 00:06:21.687 tick 500 00:06:21.687 tick 100 00:06:21.687 tick 100 00:06:21.687 tick 250 00:06:21.687 tick 100 00:06:21.687 tick 100 00:06:21.687 test_end 00:06:21.687 00:06:21.687 real 0m1.213s 00:06:21.687 user 0m1.134s 00:06:21.687 sys 0m0.075s 00:06:21.687 14:48:37 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.687 14:48:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:21.687 ************************************ 00:06:21.687 END TEST event_reactor 00:06:21.687 ************************************ 00:06:21.687 14:48:37 event -- common/autotest_common.sh@1142 -- # return 0 00:06:21.687 14:48:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.687 14:48:37 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:21.687 14:48:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.687 14:48:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.687 ************************************ 00:06:21.687 START TEST event_reactor_perf 00:06:21.687 ************************************ 00:06:21.687 14:48:37 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.687 [2024-07-15 14:48:37.437835] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:21.687 [2024-07-15 14:48:37.437916] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626221 ] 00:06:21.687 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.687 [2024-07-15 14:48:37.509005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.687 [2024-07-15 14:48:37.579049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.627 test_start 00:06:22.627 test_end 00:06:22.627 Performance: 363791 events per second 00:06:22.627 00:06:22.627 real 0m1.214s 00:06:22.627 user 0m1.133s 00:06:22.627 sys 0m0.079s 00:06:22.627 14:48:38 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.627 14:48:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.627 ************************************ 00:06:22.627 END TEST event_reactor_perf 00:06:22.627 ************************************ 00:06:22.627 14:48:38 event -- common/autotest_common.sh@1142 -- # return 0 00:06:22.627 14:48:38 event -- event/event.sh@49 -- # uname -s 00:06:22.627 14:48:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:22.627 14:48:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:22.627 14:48:38 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.627 14:48:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.627 14:48:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.888 ************************************ 00:06:22.888 START TEST event_scheduler 00:06:22.888 ************************************ 00:06:22.888 14:48:38 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:22.888 * Looking for test storage... 00:06:22.888 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:22.888 14:48:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:22.888 14:48:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1626463 00:06:22.888 14:48:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.888 14:48:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:22.888 14:48:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1626463 00:06:22.888 14:48:38 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1626463 ']' 00:06:22.888 14:48:38 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.888 14:48:38 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.888 14:48:38 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:22.888 14:48:38 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.888 14:48:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.888 [2024-07-15 14:48:38.853766] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:22.888 [2024-07-15 14:48:38.853820] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626463 ] 00:06:22.888 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.888 [2024-07-15 14:48:38.912354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.148 [2024-07-15 14:48:38.969545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.148 [2024-07-15 14:48:38.969774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.148 [2024-07-15 14:48:38.969923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.148 [2024-07-15 14:48:38.969925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:23.719 14:48:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.719 [2024-07-15 14:48:39.635961] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:23.719 [2024-07-15 14:48:39.635975] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:23.719 [2024-07-15 14:48:39.635982] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:23.719 [2024-07-15 14:48:39.635987] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:23.719 [2024-07-15 14:48:39.635991] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.719 14:48:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.719 [2024-07-15 14:48:39.694615] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
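[Annotation] The sequence just traced is the standard --wait-for-rpc handshake: the app starts with subsystem initialization held back, framework_set_scheduler switches to the dynamic scheduler (the dpdk_governor ERROR is non-fatal — only the governor is skipped, and the load/core/busy limits are still applied), and framework_start_init releases initialization. Both RPCs appear verbatim in the trace; a condensed sketch of driving them by hand, with rpc.py's location assumed:

    #!/usr/bin/env bash
    # Drive a scheduler app started with --wait-for-rpc, as in the trace above.
    RPC=./scripts/rpc.py          # assumed path to SPDK's rpc.py
    SOCK=/var/tmp/spdk.sock       # RPC socket this test uses

    "$RPC" -s "$SOCK" framework_set_scheduler dynamic   # pick scheduler pre-init
    "$RPC" -s "$SOCK" framework_start_init              # finish initialization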
00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.719 14:48:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.719 14:48:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.719 ************************************ 00:06:23.719 START TEST scheduler_create_thread 00:06:23.719 ************************************ 00:06:23.719 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:23.719 14:48:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.720 2 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.720 3 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.720 4 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.720 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.981 5 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.981 6 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.981 7 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.981 8 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.981 9 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.981 14:48:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.242 10 00:06:24.242 14:48:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.242 14:48:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:24.242 14:48:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.242 14:48:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.625 14:48:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.625 14:48:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:25.625 14:48:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:25.625 14:48:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.625 14:48:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.583 14:48:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.583 14:48:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:26.583 14:48:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.583 14:48:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.154 14:48:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.154 14:48:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:27.154 14:48:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:27.154 14:48:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.154 14:48:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.094 14:48:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.094 00:06:28.094 real 0m4.223s 00:06:28.094 user 0m0.027s 00:06:28.094 sys 0m0.004s 00:06:28.094 14:48:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.094 14:48:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.094 ************************************ 00:06:28.094 END TEST scheduler_create_thread 00:06:28.094 ************************************ 00:06:28.094 14:48:43 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:28.094 14:48:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:28.094 14:48:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1626463 00:06:28.094 14:48:43 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1626463 ']' 00:06:28.094 14:48:43 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1626463 00:06:28.094 14:48:43 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:28.094 14:48:43 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.094 14:48:44 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1626463 00:06:28.094 14:48:44 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:28.094 14:48:44 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:28.094 14:48:44 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1626463' 00:06:28.094 killing process with pid 1626463 00:06:28.094 14:48:44 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1626463 00:06:28.094 14:48:44 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1626463 00:06:28.353 [2024-07-15 14:48:44.235797] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
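[editor note] The scheduler_create_thread test exercises the thread-management RPCs exposed by the test's scheduler_plugin, loaded via rpc_cmd --plugin. Reconstructed from the trace above; capturing the returned id with command substitution is an assumption about how scheduler.sh stores thread_id:

    # scheduler.sh@22-26 above: create a half-active thread, raise its
    # activity to 50, then create a throwaway thread and delete it.
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"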
00:06:28.353 00:06:28.353 real 0m5.695s 00:06:28.353 user 0m12.727s 00:06:28.353 sys 0m0.354s 00:06:28.353 14:48:44 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.353 14:48:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.353 ************************************ 00:06:28.353 END TEST event_scheduler 00:06:28.353 ************************************ 00:06:28.613 14:48:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:28.613 14:48:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:28.613 14:48:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:28.613 14:48:44 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.613 14:48:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.613 14:48:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.613 ************************************ 00:06:28.613 START TEST app_repeat 00:06:28.613 ************************************ 00:06:28.613 14:48:44 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1627702 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:28.613 14:48:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1627702' 00:06:28.614 Process app_repeat pid: 1627702 00:06:28.614 14:48:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.614 14:48:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:28.614 spdk_app_start Round 0 00:06:28.614 14:48:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1627702 /var/tmp/spdk-nbd.sock 00:06:28.614 14:48:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1627702 ']' 00:06:28.614 14:48:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.614 14:48:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.614 14:48:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.614 14:48:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.614 14:48:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.614 [2024-07-15 14:48:44.524406] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
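[editor note] app_repeat starts one SPDK app and then restarts its framework several times over the same RPC socket. A sketch of the launch sequence traced above; backgrounding with & and capturing $! is inferred, since the trace only shows the resulting repeat_pid:

    # event.sh@18-25 above: run app_repeat on cores 0-1 (-m 0x3) with a
    # 4-second timer per round, then wait for its UNIX-domain RPC socket.
    ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock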
00:06:28.614 [2024-07-15 14:48:44.524486] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627702 ] 00:06:28.614 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.614 [2024-07-15 14:48:44.593774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.614 [2024-07-15 14:48:44.664331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.614 [2024-07-15 14:48:44.664460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.554 14:48:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.554 14:48:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.554 14:48:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.554 Malloc0 00:06:29.554 14:48:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.815 Malloc1 00:06:29.815 14:48:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.815 /dev/nbd0 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.815 14:48:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:29.815 14:48:45 event.app_repeat -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.815 1+0 records in 00:06:29.815 1+0 records out 00:06:29.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273149 s, 15.0 MB/s 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:29.815 14:48:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:29.816 14:48:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:29.816 14:48:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:29.816 14:48:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:29.816 14:48:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.816 14:48:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.816 14:48:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.076 /dev/nbd1 00:06:30.076 14:48:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.076 14:48:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.076 14:48:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.076 14:48:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.076 14:48:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.076 14:48:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.076 14:48:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.076 14:48:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.077 14:48:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.077 14:48:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.077 14:48:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.077 1+0 records in 00:06:30.077 1+0 records out 00:06:30.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271816 s, 15.1 MB/s 00:06:30.077 14:48:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:30.077 14:48:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.077 14:48:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:30.077 14:48:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.077 14:48:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.077 14:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.077 14:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:06:30.077 14:48:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.077 14:48:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.077 14:48:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.339 { 00:06:30.339 "nbd_device": "/dev/nbd0", 00:06:30.339 "bdev_name": "Malloc0" 00:06:30.339 }, 00:06:30.339 { 00:06:30.339 "nbd_device": "/dev/nbd1", 00:06:30.339 "bdev_name": "Malloc1" 00:06:30.339 } 00:06:30.339 ]' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.339 { 00:06:30.339 "nbd_device": "/dev/nbd0", 00:06:30.339 "bdev_name": "Malloc0" 00:06:30.339 }, 00:06:30.339 { 00:06:30.339 "nbd_device": "/dev/nbd1", 00:06:30.339 "bdev_name": "Malloc1" 00:06:30.339 } 00:06:30.339 ]' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.339 /dev/nbd1' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.339 /dev/nbd1' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.339 256+0 records in 00:06:30.339 256+0 records out 00:06:30.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123657 s, 84.8 MB/s 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.339 256+0 records in 00:06:30.339 256+0 records out 00:06:30.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155859 s, 67.3 MB/s 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.339 256+0 records in 00:06:30.339 256+0 records out 00:06:30.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166531 s, 63.0 MB/s 00:06:30.339 14:48:46 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.339 14:48:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.614 
14:48:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.614 14:48:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.874 14:48:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.874 14:48:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.134 14:48:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.134 [2024-07-15 14:48:47.163061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.394 [2024-07-15 14:48:47.227779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.394 [2024-07-15 14:48:47.227782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.394 [2024-07-15 14:48:47.259166] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.395 [2024-07-15 14:48:47.259202] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.695 14:48:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.695 14:48:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:34.695 spdk_app_start Round 1 00:06:34.695 14:48:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1627702 /var/tmp/spdk-nbd.sock 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1627702 ']' 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
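[editor note] Each app_repeat round runs the same nbd-backed data verification. The sketch below condenses the nbd_common.sh steps traced above into one script; the relative rpc.py path and the local nbdrandtest filename are simplifications of the full workspace paths in the log:

    # Condensed per-round data check (nbd_common.sh@90-109 above): export two
    # malloc bdevs over NBD, write random data through the kernel block
    # devices, read it back and compare.
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096            # creates Malloc0
    rpc bdev_malloc_create 64 4096            # creates Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"       # any mismatch fails the round
    done
    rm nbdrandtest
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1

oflag=direct bypasses the page cache, so the comparison exercises the nbd/bdev path rather than cached data.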
00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:34.695 14:48:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.695 Malloc0 00:06:34.695 14:48:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.695 Malloc1 00:06:34.695 14:48:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.695 /dev/nbd0 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.695 14:48:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:34.695 1+0 records in 00:06:34.695 1+0 records out 00:06:34.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266579 s, 15.4 MB/s 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:34.695 14:48:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:34.696 14:48:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.696 14:48:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:34.696 14:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.696 14:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.696 14:48:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.954 /dev/nbd1 00:06:34.954 14:48:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.954 14:48:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.954 1+0 records in 00:06:34.954 1+0 records out 00:06:34.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250768 s, 16.3 MB/s 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.954 14:48:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:34.954 14:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.954 14:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.954 14:48:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.954 14:48:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.954 14:48:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.214 { 00:06:35.214 
"nbd_device": "/dev/nbd0", 00:06:35.214 "bdev_name": "Malloc0" 00:06:35.214 }, 00:06:35.214 { 00:06:35.214 "nbd_device": "/dev/nbd1", 00:06:35.214 "bdev_name": "Malloc1" 00:06:35.214 } 00:06:35.214 ]' 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.214 { 00:06:35.214 "nbd_device": "/dev/nbd0", 00:06:35.214 "bdev_name": "Malloc0" 00:06:35.214 }, 00:06:35.214 { 00:06:35.214 "nbd_device": "/dev/nbd1", 00:06:35.214 "bdev_name": "Malloc1" 00:06:35.214 } 00:06:35.214 ]' 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.214 /dev/nbd1' 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.214 /dev/nbd1' 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.214 256+0 records in 00:06:35.214 256+0 records out 00:06:35.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117886 s, 88.9 MB/s 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.214 14:48:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.214 256+0 records in 00:06:35.214 256+0 records out 00:06:35.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160157 s, 65.5 MB/s 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.215 256+0 records in 00:06:35.215 256+0 records out 00:06:35.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165704 s, 63.3 MB/s 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.215 14:48:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.475 14:48:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.476 14:48:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.476 14:48:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.476 14:48:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.476 14:48:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.476 14:48:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.476 14:48:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.476 14:48:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.476 14:48:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.476 14:48:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.741 14:48:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.741 14:48:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.001 14:48:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:36.001 [2024-07-15 14:48:52.020536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.261 [2024-07-15 14:48:52.084932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.261 [2024-07-15 14:48:52.084934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.261 [2024-07-15 14:48:52.117259] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:36.261 [2024-07-15 14:48:52.117293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.559 14:48:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.559 14:48:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:39.559 spdk_app_start Round 2 00:06:39.559 14:48:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1627702 /var/tmp/spdk-nbd.sock 00:06:39.559 14:48:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1627702 ']' 00:06:39.559 14:48:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.559 14:48:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.559 14:48:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
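[editor note] Before dd touches an NBD device, the waitfornbd helper seen in the trace polls /proc/partitions and then performs one direct read. A rough reimplementation for reference; the /tmp scratch path and the sleep between polls are assumptions, as the log's helper uses the workspace nbdtest file and shows no delay because grep matches immediately:

    # Approximation of waitfornbd from autotest_common.sh (@866-887 in the trace).
    waitfornbd() {
        local nbd_name=$1 size i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct 4 KiB read proves the device is actually serving I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }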
00:06:39.559 14:48:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.559 14:48:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:39.559 14:48:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.559 Malloc0 00:06:39.559 14:48:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.559 Malloc1 00:06:39.559 14:48:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.559 /dev/nbd0 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:39.559 1+0 records in 00:06:39.559 1+0 records out 00:06:39.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280353 s, 14.6 MB/s 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.559 14:48:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.559 14:48:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.821 /dev/nbd1 00:06:39.821 14:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.821 14:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.821 1+0 records in 00:06:39.821 1+0 records out 00:06:39.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281317 s, 14.6 MB/s 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.821 14:48:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:39.821 14:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.821 14:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.821 14:48:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.821 14:48:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.821 14:48:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.082 { 00:06:40.082 
"nbd_device": "/dev/nbd0", 00:06:40.082 "bdev_name": "Malloc0" 00:06:40.082 }, 00:06:40.082 { 00:06:40.082 "nbd_device": "/dev/nbd1", 00:06:40.082 "bdev_name": "Malloc1" 00:06:40.082 } 00:06:40.082 ]' 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.082 { 00:06:40.082 "nbd_device": "/dev/nbd0", 00:06:40.082 "bdev_name": "Malloc0" 00:06:40.082 }, 00:06:40.082 { 00:06:40.082 "nbd_device": "/dev/nbd1", 00:06:40.082 "bdev_name": "Malloc1" 00:06:40.082 } 00:06:40.082 ]' 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.082 /dev/nbd1' 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.082 /dev/nbd1' 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.082 256+0 records in 00:06:40.082 256+0 records out 00:06:40.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116589 s, 89.9 MB/s 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.082 256+0 records in 00:06:40.082 256+0 records out 00:06:40.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157798 s, 66.5 MB/s 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.082 256+0 records in 00:06:40.082 256+0 records out 00:06:40.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168721 s, 62.1 MB/s 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.082 14:48:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.082 14:48:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.343 14:48:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.604 14:48:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.604 14:48:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.864 14:48:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.864 [2024-07-15 14:48:56.843556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.864 [2024-07-15 14:48:56.907177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.864 [2024-07-15 14:48:56.907181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.128 [2024-07-15 14:48:56.938611] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.128 [2024-07-15 14:48:56.938645] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.722 14:48:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1627702 /var/tmp/spdk-nbd.sock 00:06:43.722 14:48:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1627702 ']' 00:06:43.722 14:48:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.722 14:48:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.722 14:48:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
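The nbd_common.sh trace above boils down to a write/verify/teardown loop. The following is a minimal standalone sketch of that flow, assuming the NBD devices are already attached and rpc.py answers on the same /var/tmp/spdk-nbd.sock socket; the temp-file and script paths are shortened stand-ins for the workspace paths in the log.

# write phase: seed 1 MiB of random data and copy it onto every attached NBD device
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: byte-compare the first 1 MiB of each device against the source file
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"

# teardown: detach each device and poll /proc/partitions until it disappears
# (the real waitfornbd_exit bounds this loop at 20 attempts)
for dev in "${nbd_list[@]}"; do
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
done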
00:06:43.722 14:48:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.722 14:48:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.984 14:48:59 event.app_repeat -- event/event.sh@39 -- # killprocess 1627702 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1627702 ']' 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1627702 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1627702 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1627702' 00:06:43.984 killing process with pid 1627702 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1627702 00:06:43.984 14:48:59 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1627702 00:06:43.984 spdk_app_start is called in Round 0. 00:06:43.984 Shutdown signal received, stop current app iteration 00:06:43.984 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:43.984 spdk_app_start is called in Round 1. 00:06:43.984 Shutdown signal received, stop current app iteration 00:06:43.984 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:43.984 spdk_app_start is called in Round 2. 00:06:43.984 Shutdown signal received, stop current app iteration 00:06:43.984 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:43.984 spdk_app_start is called in Round 3. 
00:06:43.984 Shutdown signal received, stop current app iteration 00:06:43.984 14:49:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.984 14:49:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:43.984 00:06:43.984 real 0m15.553s 00:06:43.984 user 0m33.544s 00:06:43.984 sys 0m2.120s 00:06:43.984 14:49:00 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.984 14:49:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.984 ************************************ 00:06:43.984 END TEST app_repeat 00:06:43.984 ************************************ 00:06:44.245 14:49:00 event -- common/autotest_common.sh@1142 -- # return 0 00:06:44.245 14:49:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:44.245 14:49:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:44.245 14:49:00 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.245 14:49:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.245 14:49:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.245 ************************************ 00:06:44.245 START TEST cpu_locks 00:06:44.245 ************************************ 00:06:44.245 14:49:00 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:44.245 * Looking for test storage... 00:06:44.245 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:44.245 14:49:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:44.245 14:49:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:44.245 14:49:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:44.245 14:49:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:44.245 14:49:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.245 14:49:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.245 14:49:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.245 ************************************ 00:06:44.245 START TEST default_locks 00:06:44.245 ************************************ 00:06:44.245 14:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:44.245 14:49:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1630963 00:06:44.245 14:49:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1630963 00:06:44.245 14:49:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.245 14:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1630963 ']' 00:06:44.245 14:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.245 14:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.245 14:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
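The killprocess calls that recur throughout this run (first for the app_repeat pid above, later for every spdk_tgt in the cpu_locks tests) follow the same guard-then-kill-then-wait shape. A condensed sketch with the pid as a parameter; the real helper lives in autotest_common.sh and has extra handling for sudo-wrapped targets, which the '[ reactor_0 = sudo ]' check in the trace is probing for.

killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # refuse an empty pid
    kill -0 "$pid"                            # is it still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")   # an SPDK app shows up as reactor_0
    [ "$name" = sudo ] && return 1            # sketch only: do not SIGTERM a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap it; only valid when $pid is our own child
}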
00:06:44.245 14:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.245 14:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.245 [2024-07-15 14:49:00.305185] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:44.245 [2024-07-15 14:49:00.305247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1630963 ] 00:06:44.507 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.507 [2024-07-15 14:49:00.371237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.507 [2024-07-15 14:49:00.435988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.080 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.080 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:45.080 14:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1630963 00:06:45.080 14:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1630963 00:06:45.080 14:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.653 lslocks: write error 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1630963 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1630963 ']' 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1630963 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1630963 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1630963' 00:06:45.653 killing process with pid 1630963 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1630963 00:06:45.653 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1630963 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1630963 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1630963 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1630963 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1630963 ']' 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.916 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1630963) - No such process 00:06:45.916 ERROR: process (pid: 1630963) is no longer running 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.916 00:06:45.916 real 0m1.548s 00:06:45.916 user 0m1.657s 00:06:45.916 sys 0m0.517s 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.916 14:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.916 ************************************ 00:06:45.916 END TEST default_locks 00:06:45.916 ************************************ 00:06:45.916 14:49:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.916 14:49:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:45.916 14:49:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.916 14:49:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.916 14:49:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.916 ************************************ 00:06:45.916 START TEST default_locks_via_rpc 00:06:45.916 ************************************ 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1631324 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1631324 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1631324 ']' 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.916 14:49:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.916 [2024-07-15 14:49:01.922625] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:45.916 [2024-07-15 14:49:01.922673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631324 ] 00:06:45.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.177 [2024-07-15 14:49:01.989187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.177 [2024-07-15 14:49:02.055440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1631324 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1631324 00:06:46.749 14:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
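The default_locks_via_rpc steps just traced reduce to a round trip: start a one-core target, drop its core lock over RPC, confirm no spdk_cpu_lock file is flocked, re-enable the lock, and confirm it is held again. A hedged sketch using the same RPC method names as the trace; the pid lookup and relative script path are placeholders rather than what the job ran.

rpc=./scripts/rpc.py                                   # defaults to /var/tmp/spdk.sock
pid=$(pgrep -f spdk_tgt | head -n1)                    # stand-in for the pid captured by the harness

$rpc framework_disable_cpumask_locks                   # release the per-core lock files
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "unexpected: core lock still held" >&2; exit 1
fi
$rpc framework_enable_cpumask_locks                    # take them again
lslocks -p "$pid" | grep -q spdk_cpu_lock              # locks_exist: the flock must be visible once more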
00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1631324 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1631324 ']' 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1631324 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631324 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631324' 00:06:47.329 killing process with pid 1631324 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1631324 00:06:47.329 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1631324 00:06:47.589 00:06:47.589 real 0m1.713s 00:06:47.589 user 0m1.813s 00:06:47.589 sys 0m0.550s 00:06:47.589 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.589 14:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.589 ************************************ 00:06:47.589 END TEST default_locks_via_rpc 00:06:47.589 ************************************ 00:06:47.589 14:49:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:47.589 14:49:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:47.589 14:49:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.589 14:49:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.589 14:49:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.589 ************************************ 00:06:47.589 START TEST non_locking_app_on_locked_coremask 00:06:47.589 ************************************ 00:06:47.589 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:47.590 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1631704 00:06:47.590 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1631704 /var/tmp/spdk.sock 00:06:47.851 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1631704 ']' 00:06:47.851 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.851 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.851 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:47.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.851 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.851 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.851 14:49:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.851 [2024-07-15 14:49:03.704255] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:47.851 [2024-07-15 14:49:03.704305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631704 ] 00:06:47.851 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.851 [2024-07-15 14:49:03.770875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.851 [2024-07-15 14:49:03.838943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1632019 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1632019 /var/tmp/spdk2.sock 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1632019 ']' 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.424 14:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:48.686 [2024-07-15 14:49:04.508472] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:48.686 [2024-07-15 14:49:04.508526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632019 ] 00:06:48.686 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.686 [2024-07-15 14:49:04.607433] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
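The 'CPU core locks deactivated' notice directly above is the point of this test: the second target is started with --disable-cpumask-locks so it can share core 0 with the first one, which still holds /var/tmp/spdk_cpu_lock_000. A minimal sketch of that pairing, assuming a built spdk_tgt in the usual build/bin location and replacing the harness's waitforlisten handshake with a crude sleep:

./build/bin/spdk_tgt -m 0x1 &                                                  # claims the core-0 lock
first=$!
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, takes no lock
second=$!
sleep 2                                                                        # stand-in for waitforlisten

lslocks -p "$first"  | grep -q spdk_cpu_lock && echo "first instance holds the core lock"
lslocks -p "$second" | grep -q spdk_cpu_lock || echo "second instance holds no core lock"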
00:06:48.686 [2024-07-15 14:49:04.607463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.686 [2024-07-15 14:49:04.736674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.260 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.260 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:49.260 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1631704 00:06:49.260 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1631704 00:06:49.260 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.832 lslocks: write error 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1631704 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1631704 ']' 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1631704 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1631704 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1631704' 00:06:49.832 killing process with pid 1631704 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1631704 00:06:49.832 14:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1631704 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1632019 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1632019 ']' 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1632019 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632019 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632019' 00:06:50.406 
killing process with pid 1632019 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1632019 00:06:50.406 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1632019 00:06:50.672 00:06:50.672 real 0m2.850s 00:06:50.672 user 0m3.109s 00:06:50.672 sys 0m0.817s 00:06:50.672 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.672 14:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.672 ************************************ 00:06:50.672 END TEST non_locking_app_on_locked_coremask 00:06:50.672 ************************************ 00:06:50.672 14:49:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:50.672 14:49:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:50.672 14:49:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.672 14:49:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.672 14:49:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.672 ************************************ 00:06:50.672 START TEST locking_app_on_unlocked_coremask 00:06:50.672 ************************************ 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1632394 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1632394 /var/tmp/spdk.sock 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1632394 ']' 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.672 14:49:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:50.672 [2024-07-15 14:49:06.619441] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:50.672 [2024-07-15 14:49:06.619490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632394 ] 00:06:50.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.672 [2024-07-15 14:49:06.684373] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:50.672 [2024-07-15 14:49:06.684403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.978 [2024-07-15 14:49:06.749639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1632599 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1632599 /var/tmp/spdk2.sock 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1632599 ']' 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.577 14:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:51.577 [2024-07-15 14:49:07.420805] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:51.577 [2024-07-15 14:49:07.420860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632599 ] 00:06:51.577 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.577 [2024-07-15 14:49:07.520648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.839 [2024-07-15 14:49:07.649789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.411 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.411 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:52.411 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1632599 00:06:52.411 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1632599 00:06:52.411 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.982 lslocks: write error 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1632394 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1632394 ']' 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1632394 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632394 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632394' 00:06:52.982 killing process with pid 1632394 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1632394 00:06:52.982 14:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1632394 00:06:53.243 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1632599 00:06:53.243 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1632599 ']' 00:06:53.243 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1632599 00:06:53.243 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:53.243 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.243 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632599 00:06:53.503 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:53.503 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.503 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632599' 00:06:53.503 killing process with pid 1632599 00:06:53.503 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1632599 00:06:53.503 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1632599 00:06:53.503 00:06:53.503 real 0m2.959s 00:06:53.503 user 0m3.235s 00:06:53.503 sys 0m0.858s 00:06:53.503 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.503 14:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.503 ************************************ 00:06:53.503 END TEST locking_app_on_unlocked_coremask 00:06:53.503 ************************************ 00:06:53.503 14:49:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:53.503 14:49:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:53.503 14:49:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.503 14:49:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.503 14:49:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.764 ************************************ 00:06:53.764 START TEST locking_app_on_locked_coremask 00:06:53.764 ************************************ 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1633104 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1633104 /var/tmp/spdk.sock 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1633104 ']' 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.764 14:49:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.764 [2024-07-15 14:49:09.651116] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
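A note on the 'lslocks: write error' lines sprinkled through these tests: locks_exist pipes lslocks into grep -q, grep exits as soon as it finds a match, the pipe closes, and lslocks reports the failed write on stderr. It is noise, not a test failure. The same early-exit-reader pattern, shown generically (lslocks happens to print a message on EPIPE where most tools die silently on SIGPIPE):

lslocks | grep -q spdk_cpu_lock && echo "an SPDK core lock is currently flocked"   # the reader may stop before the writer is done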
00:06:53.764 [2024-07-15 14:49:09.651166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633104 ] 00:06:53.764 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.764 [2024-07-15 14:49:09.716897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.764 [2024-07-15 14:49:09.781974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.705 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.705 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1633150 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1633150 /var/tmp/spdk2.sock 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1633150 /var/tmp/spdk2.sock 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1633150 /var/tmp/spdk2.sock 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1633150 ']' 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.706 14:49:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.706 [2024-07-15 14:49:10.476041] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:54.706 [2024-07-15 14:49:10.476093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633150 ] 00:06:54.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.706 [2024-07-15 14:49:10.576058] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1633104 has claimed it. 00:06:54.706 [2024-07-15 14:49:10.576107] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.278 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1633150) - No such process 00:06:55.278 ERROR: process (pid: 1633150) is no longer running 00:06:55.278 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.278 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:55.278 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:55.278 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.278 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:55.278 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.278 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1633104 00:06:55.278 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1633104 00:06:55.278 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.538 lslocks: write error 00:06:55.538 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1633104 00:06:55.538 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1633104 ']' 00:06:55.538 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1633104 00:06:55.538 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:55.538 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.538 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1633104 00:06:55.798 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.798 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.799 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1633104' 00:06:55.799 killing process with pid 1633104 00:06:55.799 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1633104 00:06:55.799 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1633104 00:06:55.799 00:06:55.799 real 0m2.245s 00:06:55.799 user 0m2.492s 00:06:55.799 sys 0m0.623s 00:06:55.799 14:49:11 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.799 14:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.799 ************************************ 00:06:55.799 END TEST locking_app_on_locked_coremask 00:06:55.799 ************************************ 00:06:56.060 14:49:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.060 14:49:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:56.060 14:49:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.060 14:49:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.060 14:49:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.060 ************************************ 00:06:56.060 START TEST locking_overlapped_coremask 00:06:56.060 ************************************ 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1633483 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1633483 /var/tmp/spdk.sock 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1633483 ']' 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.060 14:49:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.060 [2024-07-15 14:49:11.970436] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
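What just happened in locking_app_on_locked_coremask: the second spdk_tgt refused to start ('Cannot create lock on core 0, probably process 1633104 has claimed it'), and the harness asserted that failure through its NOT wrapper, whose es bookkeeping is visible in the trace. A stripped-down version of that expect-failure idiom, named differently to make clear it is a sketch rather than the helper in autotest_common.sh:

expect_failure() {
    local es=0
    "$@" || es=$?               # run the command, remembering any non-zero exit status
    if (( es > 128 )); then     # 129+ usually means it died from a signal
        return 1                # in this sketch, treat that as a test problem, not an expected failure
    fi
    (( es != 0 ))               # succeed only when the command itself failed
}

# usage sketch: a second one-core target on an already-claimed core must exit non-zero
expect_failure ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock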
00:06:56.060 [2024-07-15 14:49:11.970486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633483 ] 00:06:56.060 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.060 [2024-07-15 14:49:12.037468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.060 [2024-07-15 14:49:12.104067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.060 [2024-07-15 14:49:12.104187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.060 [2024-07-15 14:49:12.104190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1633817 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1633817 /var/tmp/spdk2.sock 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1633817 /var/tmp/spdk2.sock 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1633817 /var/tmp/spdk2.sock 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1633817 ']' 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.002 14:49:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.002 [2024-07-15 14:49:12.780296] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:57.002 [2024-07-15 14:49:12.780346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633817 ] 00:06:57.002 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.002 [2024-07-15 14:49:12.862566] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1633483 has claimed it. 00:06:57.002 [2024-07-15 14:49:12.862601] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:57.571 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1633817) - No such process 00:06:57.571 ERROR: process (pid: 1633817) is no longer running 00:06:57.571 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.571 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:57.571 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:57.571 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.571 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.571 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.571 14:49:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:57.571 14:49:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.571 14:49:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1633483 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1633483 ']' 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1633483 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1633483 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1633483' 00:06:57.572 killing process with pid 1633483 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1633483 00:06:57.572 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1633483 00:06:57.831 00:06:57.831 real 0m1.745s 00:06:57.831 user 0m4.944s 00:06:57.831 sys 0m0.350s 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.832 ************************************ 00:06:57.832 END TEST locking_overlapped_coremask 00:06:57.832 ************************************ 00:06:57.832 14:49:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.832 14:49:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:57.832 14:49:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.832 14:49:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.832 14:49:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.832 ************************************ 00:06:57.832 START TEST locking_overlapped_coremask_via_rpc 00:06:57.832 ************************************ 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1633881 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1633881 /var/tmp/spdk.sock 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1633881 ']' 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.832 14:49:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:57.832 [2024-07-15 14:49:13.780883] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:57.832 [2024-07-15 14:49:13.780933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633881 ] 00:06:57.832 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.832 [2024-07-15 14:49:13.847824] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:57.832 [2024-07-15 14:49:13.847853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.091 [2024-07-15 14:49:13.916076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.091 [2024-07-15 14:49:13.916189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.091 [2024-07-15 14:49:13.916192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1634190 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1634190 /var/tmp/spdk2.sock 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1634190 ']' 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.660 14:49:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.660 [2024-07-15 14:49:14.598568] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:58.660 [2024-07-15 14:49:14.598618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1634190 ] 00:06:58.660 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.660 [2024-07-15 14:49:14.679387] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
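Because both targets start with --disable-cpumask-locks, the overlapping masks (0x7 and 0x1c) are allowed to boot; the per-core locks are only claimed later through JSON-RPC, which is what the following calls exercise. A minimal sketch of that flow, assuming the standard scripts/rpc.py client (paths and sockets taken from the commands above, so illustrative rather than a verbatim test step):

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  ./scripts/rpc.py framework_enable_cpumask_locks                           # first claim succeeds
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # fails: core 2 already claimed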
00:06:58.660 [2024-07-15 14:49:14.679411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.919 [2024-07-15 14:49:14.785292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.919 [2024-07-15 14:49:14.789351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.919 [2024-07-15 14:49:14.789354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.488 [2024-07-15 14:49:15.377291] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1633881 has claimed it. 
00:06:59.488 request: 00:06:59.488 { 00:06:59.488 "method": "framework_enable_cpumask_locks", 00:06:59.488 "req_id": 1 00:06:59.488 } 00:06:59.488 Got JSON-RPC error response 00:06:59.488 response: 00:06:59.488 { 00:06:59.488 "code": -32603, 00:06:59.488 "message": "Failed to claim CPU core: 2" 00:06:59.488 } 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1633881 /var/tmp/spdk.sock 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1633881 ']' 00:06:59.488 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.489 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.489 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.489 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.489 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1634190 /var/tmp/spdk2.sock 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1634190 ']' 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
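The -32603 error above is the RPC-level view of the same core-2 conflict; the claims themselves are tracked as per-core lock files under /var/tmp, which is what check_remaining_locks verifies next. Inspecting them by hand would look roughly like this (illustrative; the exact set depends on which cores are currently claimed):

  ls /var/tmp/spdk_cpu_lock_*
  # expected here: /var/tmp/spdk_cpu_lock_000 ... /var/tmp/spdk_cpu_lock_002 (cores 0-2, held by the first target)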
00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.748 00:06:59.748 real 0m1.999s 00:06:59.748 user 0m0.747s 00:06:59.748 sys 0m0.178s 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.748 14:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.748 ************************************ 00:06:59.748 END TEST locking_overlapped_coremask_via_rpc 00:06:59.748 ************************************ 00:06:59.748 14:49:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:59.748 14:49:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:59.748 14:49:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1633881 ]] 00:06:59.748 14:49:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1633881 00:06:59.748 14:49:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1633881 ']' 00:06:59.748 14:49:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1633881 00:06:59.748 14:49:15 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:59.748 14:49:15 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.748 14:49:15 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1633881 00:07:00.008 14:49:15 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.008 14:49:15 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.008 14:49:15 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1633881' 00:07:00.008 killing process with pid 1633881 00:07:00.008 14:49:15 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1633881 00:07:00.008 14:49:15 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1633881 00:07:00.008 14:49:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1634190 ]] 00:07:00.008 14:49:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1634190 00:07:00.008 14:49:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1634190 ']' 00:07:00.008 14:49:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1634190 00:07:00.008 14:49:16 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:00.008 14:49:16 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.008 14:49:16 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1634190 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1634190' 00:07:00.268 killing process with pid 1634190 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1634190 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1634190 00:07:00.268 14:49:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.268 14:49:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:00.268 14:49:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1633881 ]] 00:07:00.268 14:49:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1633881 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1633881 ']' 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1633881 00:07:00.268 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1633881) - No such process 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1633881 is not found' 00:07:00.268 Process with pid 1633881 is not found 00:07:00.268 14:49:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1634190 ]] 00:07:00.268 14:49:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1634190 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1634190 ']' 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1634190 00:07:00.268 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1634190) - No such process 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1634190 is not found' 00:07:00.268 Process with pid 1634190 is not found 00:07:00.268 14:49:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.268 00:07:00.268 real 0m16.174s 00:07:00.268 user 0m27.578s 00:07:00.268 sys 0m4.710s 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.268 14:49:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.268 ************************************ 00:07:00.268 END TEST cpu_locks 00:07:00.268 ************************************ 00:07:00.268 14:49:16 event -- common/autotest_common.sh@1142 -- # return 0 00:07:00.529 00:07:00.529 real 0m41.641s 00:07:00.529 user 1m20.455s 00:07:00.529 sys 0m7.816s 00:07:00.529 14:49:16 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.529 14:49:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.529 ************************************ 00:07:00.529 END TEST event 00:07:00.529 ************************************ 00:07:00.529 14:49:16 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.529 14:49:16 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:00.529 14:49:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.529 14:49:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.529 14:49:16 -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.529 ************************************ 00:07:00.529 START TEST thread 00:07:00.529 ************************************ 00:07:00.529 14:49:16 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:00.529 * Looking for test storage... 00:07:00.529 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:00.529 14:49:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.529 14:49:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:00.529 14:49:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.529 14:49:16 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.529 ************************************ 00:07:00.529 START TEST thread_poller_perf 00:07:00.529 ************************************ 00:07:00.529 14:49:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.529 [2024-07-15 14:49:16.563625] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:00.529 [2024-07-15 14:49:16.563731] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1634630 ] 00:07:00.789 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.789 [2024-07-15 14:49:16.637467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.789 [2024-07-15 14:49:16.710334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.789 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:01.728 ====================================== 00:07:01.728 busy:2413033290 (cyc) 00:07:01.728 total_run_count: 288000 00:07:01.728 tsc_hz: 2400000000 (cyc) 00:07:01.728 ====================================== 00:07:01.728 poller_cost: 8378 (cyc), 3490 (nsec) 00:07:01.728 00:07:01.728 real 0m1.231s 00:07:01.728 user 0m1.147s 00:07:01.728 sys 0m0.080s 00:07:01.728 14:49:17 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.728 14:49:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.728 ************************************ 00:07:01.728 END TEST thread_poller_perf 00:07:01.728 ************************************ 00:07:01.989 14:49:17 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:01.989 14:49:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.989 14:49:17 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:01.989 14:49:17 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.989 14:49:17 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.989 ************************************ 00:07:01.989 START TEST thread_poller_perf 00:07:01.989 ************************************ 00:07:01.989 14:49:17 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.989 [2024-07-15 14:49:17.871864] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:01.989 [2024-07-15 14:49:17.871958] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1634982 ] 00:07:01.989 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.989 [2024-07-15 14:49:17.944041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.989 [2024-07-15 14:49:18.011381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.989 Running 1000 pollers for 1 seconds with 0 microseconds period. 
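poller_cost in the summary above is simply the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz; the run itself was started with -b 1000 -l 1 -t 1 (1000 pollers, 1 microsecond period, 1 second). Reproducing the first run's numbers with bash integer arithmetic (a sketch, values copied from the output):

  echo $(( 2413033290 / 288000 ))                                # 8378 cycles per poller invocation
  echo $(( 2413033290 / 288000 * 1000000000 / 2400000000 ))      # 3490 nsec at tsc_hz=2400000000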
00:07:03.372 ====================================== 00:07:03.372 busy:2402154030 (cyc) 00:07:03.372 total_run_count: 3803000 00:07:03.372 tsc_hz: 2400000000 (cyc) 00:07:03.372 ====================================== 00:07:03.372 poller_cost: 631 (cyc), 262 (nsec) 00:07:03.372 00:07:03.372 real 0m1.216s 00:07:03.372 user 0m1.140s 00:07:03.372 sys 0m0.071s 00:07:03.372 14:49:19 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.372 14:49:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 ************************************ 00:07:03.372 END TEST thread_poller_perf 00:07:03.372 ************************************ 00:07:03.372 14:49:19 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:03.372 14:49:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.372 00:07:03.372 real 0m2.697s 00:07:03.372 user 0m2.385s 00:07:03.372 sys 0m0.320s 00:07:03.372 14:49:19 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.372 14:49:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 ************************************ 00:07:03.372 END TEST thread 00:07:03.372 ************************************ 00:07:03.372 14:49:19 -- common/autotest_common.sh@1142 -- # return 0 00:07:03.372 14:49:19 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:03.372 14:49:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.372 14:49:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.372 14:49:19 -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 ************************************ 00:07:03.372 START TEST accel 00:07:03.372 ************************************ 00:07:03.372 14:49:19 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:03.372 * Looking for test storage... 00:07:03.372 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:03.372 14:49:19 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:03.372 14:49:19 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:03.372 14:49:19 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:03.372 14:49:19 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1635329 00:07:03.372 14:49:19 accel -- accel/accel.sh@63 -- # waitforlisten 1635329 00:07:03.372 14:49:19 accel -- common/autotest_common.sh@829 -- # '[' -z 1635329 ']' 00:07:03.372 14:49:19 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.372 14:49:19 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.372 14:49:19 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:03.372 14:49:19 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:03.372 14:49:19 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.372 14:49:19 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:03.373 14:49:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.373 14:49:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.373 14:49:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.373 14:49:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.373 14:49:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.373 14:49:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.373 14:49:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:03.373 14:49:19 accel -- accel/accel.sh@41 -- # jq -r . 00:07:03.373 [2024-07-15 14:49:19.337266] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:03.373 [2024-07-15 14:49:19.337319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635329 ] 00:07:03.373 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.373 [2024-07-15 14:49:19.404672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.633 [2024-07-15 14:49:19.469880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@862 -- # return 0 00:07:04.206 14:49:20 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:04.206 14:49:20 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:04.206 14:49:20 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:04.206 14:49:20 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:04.206 14:49:20 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:04.206 14:49:20 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:04.206 14:49:20 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 
14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.206 14:49:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.206 14:49:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.206 14:49:20 accel -- accel/accel.sh@75 -- # killprocess 1635329 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@948 -- # '[' -z 1635329 ']' 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@952 -- # kill -0 1635329 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@953 -- # uname 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1635329 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1635329' 00:07:04.206 killing process with pid 1635329 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@967 -- # kill 1635329 00:07:04.206 14:49:20 accel -- common/autotest_common.sh@972 -- # wait 1635329 00:07:04.468 14:49:20 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:04.468 14:49:20 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:04.468 14:49:20 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:04.468 14:49:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.468 14:49:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.468 14:49:20 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:04.468 14:49:20 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:04.468 14:49:20 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:04.468 14:49:20 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.468 14:49:20 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.468 14:49:20 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.468 14:49:20 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.468 14:49:20 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.468 14:49:20 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:04.468 14:49:20 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
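The repeated IFS/read loop above is get_expected_opcs walking the opcode-to-module map; with no accel JSON config supplied, every opcode resolves to the software module. The same map can be pulled manually with the RPC the script uses (a sketch, assuming the standard scripts/rpc.py client and jq are available):

  ./scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # e.g. copy=software, fill=software, crc32c=software, compress=software, ...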
00:07:04.468 14:49:20 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.468 14:49:20 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:04.468 14:49:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.468 14:49:20 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:04.468 14:49:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:04.468 14:49:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.468 14:49:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.728 ************************************ 00:07:04.728 START TEST accel_missing_filename 00:07:04.728 ************************************ 00:07:04.728 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:04.728 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:04.728 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:04.728 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:04.728 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.728 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:04.728 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.728 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:04.728 14:49:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:04.728 14:49:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:04.728 14:49:20 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.728 14:49:20 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.728 14:49:20 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.728 14:49:20 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.728 14:49:20 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.728 14:49:20 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:04.728 14:49:20 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:04.728 [2024-07-15 14:49:20.596461] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:04.728 [2024-07-15 14:49:20.596561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635497 ] 00:07:04.728 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.728 [2024-07-15 14:49:20.668214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.728 [2024-07-15 14:49:20.741044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.728 [2024-07-15 14:49:20.773244] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.990 [2024-07-15 14:49:20.810297] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:04.990 A filename is required. 
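'A filename is required.' is the expected failure here: for the compress/decompress workloads accel_perf takes its input through -l, which this negative test deliberately omits. A plausible positive invocation, using the bib input file the next test already points at (an illustrative sketch, not one of the test steps):

  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib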
00:07:04.990 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:04.990 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.990 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:04.990 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:04.990 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:04.990 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.990 00:07:04.990 real 0m0.299s 00:07:04.990 user 0m0.225s 00:07:04.990 sys 0m0.117s 00:07:04.990 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.990 14:49:20 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:04.990 ************************************ 00:07:04.990 END TEST accel_missing_filename 00:07:04.990 ************************************ 00:07:04.990 14:49:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.990 14:49:20 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.990 14:49:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:04.990 14:49:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.990 14:49:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.990 ************************************ 00:07:04.990 START TEST accel_compress_verify 00:07:04.990 ************************************ 00:07:04.990 14:49:20 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.990 14:49:20 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:04.990 14:49:20 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.990 14:49:20 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:04.990 14:49:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.990 14:49:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:04.990 14:49:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.990 14:49:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.990 14:49:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:04.990 14:49:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:04.990 14:49:20 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.990 14:49:20 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.990 14:49:20 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.990 14:49:20 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.990 14:49:20 accel.accel_compress_verify -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.990 14:49:20 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:04.990 14:49:20 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:04.990 [2024-07-15 14:49:20.967763] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:04.990 [2024-07-15 14:49:20.967832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635701 ] 00:07:04.990 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.990 [2024-07-15 14:49:21.035799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.251 [2024-07-15 14:49:21.101374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.251 [2024-07-15 14:49:21.133283] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.251 [2024-07-15 14:49:21.170217] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:05.251 00:07:05.251 Compression does not support the verify option, aborting. 00:07:05.251 14:49:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:05.251 14:49:21 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.251 14:49:21 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:05.251 14:49:21 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:05.251 14:49:21 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:05.251 14:49:21 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.251 00:07:05.251 real 0m0.287s 00:07:05.251 user 0m0.219s 00:07:05.251 sys 0m0.109s 00:07:05.251 14:49:21 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.251 14:49:21 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.251 ************************************ 00:07:05.251 END TEST accel_compress_verify 00:07:05.251 ************************************ 00:07:05.251 14:49:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.251 14:49:21 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:05.251 14:49:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:05.251 14:49:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.251 14:49:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.251 ************************************ 00:07:05.251 START TEST accel_wrong_workload 00:07:05.251 ************************************ 00:07:05.251 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:05.251 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:05.251 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:05.251 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:05.251 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.251 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:05.251 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:05.251 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:05.251 14:49:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:05.251 14:49:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:05.251 14:49:21 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.251 14:49:21 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.251 14:49:21 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.251 14:49:21 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.251 14:49:21 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.251 14:49:21 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:05.251 14:49:21 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:05.513 Unsupported workload type: foobar 00:07:05.513 [2024-07-15 14:49:21.330532] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:05.513 accel_perf options: 00:07:05.513 [-h help message] 00:07:05.513 [-q queue depth per core] 00:07:05.513 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:05.513 [-T number of threads per core 00:07:05.513 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:05.513 [-t time in seconds] 00:07:05.513 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:05.513 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:05.513 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:05.513 [-l for compress/decompress workloads, name of uncompressed input file 00:07:05.513 [-S for crc32c workload, use this seed value (default 0) 00:07:05.513 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:05.513 [-f for fill workload, use this BYTE value (default 255) 00:07:05.513 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:05.513 [-y verify result if this switch is on] 00:07:05.513 [-a tasks to allocate per core (default: same value as -q)] 00:07:05.513 Can be used to spread operations across a wider range of memory. 
00:07:05.513 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:05.513 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.513 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.513 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.513 00:07:05.513 real 0m0.037s 00:07:05.513 user 0m0.024s 00:07:05.513 sys 0m0.013s 00:07:05.513 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.513 14:49:21 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:05.513 ************************************ 00:07:05.513 END TEST accel_wrong_workload 00:07:05.513 ************************************ 00:07:05.513 Error: writing output failed: Broken pipe 00:07:05.513 14:49:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.513 14:49:21 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:05.513 14:49:21 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:05.513 14:49:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.513 14:49:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.513 ************************************ 00:07:05.513 START TEST accel_negative_buffers 00:07:05.513 ************************************ 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:05.513 14:49:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:05.513 14:49:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:05.513 14:49:21 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.513 14:49:21 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.513 14:49:21 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.513 14:49:21 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.513 14:49:21 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.513 14:49:21 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:05.513 14:49:21 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:05.513 -x option must be non-negative. 
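The '-x option must be non-negative' rejection matches the usage text: for the xor workload -x sets the number of source buffers, with a minimum of 2, and this negative test passes -1 on purpose. The corresponding valid form would be (illustrative only):

  ./build/examples/accel_perf -t 1 -w xor -y -x 2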
00:07:05.513 [2024-07-15 14:49:21.439720] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:05.513 accel_perf options: 00:07:05.513 [-h help message] 00:07:05.513 [-q queue depth per core] 00:07:05.513 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:05.513 [-T number of threads per core 00:07:05.513 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:05.513 [-t time in seconds] 00:07:05.513 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:05.513 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:05.513 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:05.513 [-l for compress/decompress workloads, name of uncompressed input file 00:07:05.513 [-S for crc32c workload, use this seed value (default 0) 00:07:05.513 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:05.513 [-f for fill workload, use this BYTE value (default 255) 00:07:05.513 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:05.513 [-y verify result if this switch is on] 00:07:05.513 [-a tasks to allocate per core (default: same value as -q)] 00:07:05.513 Can be used to spread operations across a wider range of memory. 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.513 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.513 00:07:05.513 real 0m0.036s 00:07:05.514 user 0m0.018s 00:07:05.514 sys 0m0.017s 00:07:05.514 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.514 14:49:21 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:05.514 ************************************ 00:07:05.514 END TEST accel_negative_buffers 00:07:05.514 ************************************ 00:07:05.514 Error: writing output failed: Broken pipe 00:07:05.514 14:49:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.514 14:49:21 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:05.514 14:49:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:05.514 14:49:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.514 14:49:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.514 ************************************ 00:07:05.514 START TEST accel_crc32c 00:07:05.514 ************************************ 00:07:05.514 14:49:21 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:05.514 14:49:21 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:05.514 [2024-07-15 14:49:21.555146] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:05.514 [2024-07-15 14:49:21.555276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635836 ] 00:07:05.776 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.776 [2024-07-15 14:49:21.632499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.776 [2024-07-15 14:49:21.696170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.776 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.777 14:49:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:07.164 14:49:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.164 00:07:07.164 real 0m1.301s 00:07:07.164 user 0m1.202s 00:07:07.164 sys 0m0.111s 00:07:07.164 14:49:22 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.164 14:49:22 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:07.164 ************************************ 00:07:07.164 END TEST accel_crc32c 00:07:07.164 ************************************ 00:07:07.164 14:49:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.164 14:49:22 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:07.164 14:49:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:07.164 14:49:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.164 14:49:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.164 ************************************ 00:07:07.164 START TEST accel_crc32c_C2 00:07:07.164 ************************************ 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:07.164 14:49:22 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:07.164 14:49:22 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:07.164 [2024-07-15 14:49:22.927559] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:07.164 [2024-07-15 14:49:22.927653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636185 ] 00:07:07.164 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.164 [2024-07-15 14:49:22.994698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.164 [2024-07-15 14:49:23.058673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:07.164 14:49:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.550 00:07:08.550 real 0m1.289s 00:07:08.550 user 0m1.201s 00:07:08.550 sys 0m0.101s 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.550 14:49:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:08.550 ************************************ 00:07:08.550 END TEST accel_crc32c_C2 00:07:08.550 ************************************ 00:07:08.550 14:49:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.550 14:49:24 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:08.550 14:49:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.550 14:49:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.550 14:49:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.550 ************************************ 00:07:08.550 START TEST accel_copy 00:07:08.550 ************************************ 00:07:08.550 14:49:24 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
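The accel_crc32c_C2 run that finished just above differs from the plain crc32c run only in -C 2, which per the usage text sets the io vector size, so every submitted task carries two source buffers instead of one. A sketch under the same assumptions as the crc32c example earlier:

    # crc32c computed over a 2-element io vector per task (-C 2)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2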
00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:08.550 [2024-07-15 14:49:24.291536] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:08.550 [2024-07-15 14:49:24.291633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636443 ] 00:07:08.550 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.550 [2024-07-15 14:49:24.362243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.550 [2024-07-15 14:49:24.434174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
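The copy test now running is the simplest workload type: queue depth and transfer size stay at the harness defaults, and the trace sets a single '4096 bytes' buffer value. Reproduced by hand under the same assumptions as above:

    # plain buffer copy for 1 second with verification
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy -y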
00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.550 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.551 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.551 14:49:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.551 14:49:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.551 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.551 14:49:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:09.937 14:49:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.937 00:07:09.937 real 0m1.302s 00:07:09.937 user 0m1.195s 00:07:09.937 sys 0m0.118s 00:07:09.937 14:49:25 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.937 14:49:25 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.937 ************************************ 00:07:09.937 END TEST accel_copy 00:07:09.937 ************************************ 00:07:09.937 14:49:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.937 14:49:25 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.937 14:49:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:09.937 14:49:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.937 14:49:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.937 ************************************ 00:07:09.937 START TEST accel_fill 00:07:09.937 ************************************ 00:07:09.937 14:49:25 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 
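accel_fill is the first test in this batch to override the harness defaults: -f 128 selects the fill byte (it appears as val=0x80 in the trace below), and -q 64 together with -a 64 raises the queue depth and the number of tasks allocated per core, which per the usage text spreads operations across a wider range of memory. Same reproduction caveats as above:

    # fill with byte 0x80, queue depth 64, 64 tasks preallocated per core
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y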
00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:09.937 [2024-07-15 14:49:25.664977] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:09.937 [2024-07-15 14:49:25.665043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636638 ] 00:07:09.937 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.937 [2024-07-15 14:49:25.734753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.937 [2024-07-15 14:49:25.804999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:09.937 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.938 14:49:25 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.938 14:49:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:10.881 14:49:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.881 00:07:10.881 real 0m1.297s 00:07:10.881 user 0m1.200s 00:07:10.881 sys 0m0.109s 00:07:10.881 14:49:26 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.881 14:49:26 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:10.881 ************************************ 00:07:10.881 END TEST accel_fill 00:07:10.881 ************************************ 00:07:11.143 14:49:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.143 14:49:26 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:11.143 14:49:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:11.143 14:49:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.143 14:49:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.143 ************************************ 00:07:11.143 START TEST accel_copy_crc32c 00:07:11.143 ************************************ 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
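copy_crc32c, now starting, chains the two previous operations: each task copies a buffer and computes crc32c over it in one submission, which is why the trace below sets two '4096 bytes' values (apparently source and destination) where plain crc32c set one. Sketch under the same assumptions:

    # combined copy + crc32c in a single accel operation
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y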
00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:11.143 [2024-07-15 14:49:27.037092] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:11.143 [2024-07-15 14:49:27.037155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636924 ] 00:07:11.143 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.143 [2024-07-15 14:49:27.104388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.143 [2024-07-15 14:49:27.171030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.143 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.405 14:49:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.351 00:07:12.351 real 0m1.292s 00:07:12.351 user 0m1.205s 00:07:12.351 sys 0m0.100s 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.351 14:49:28 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:12.351 ************************************ 00:07:12.351 END TEST accel_copy_crc32c 00:07:12.351 ************************************ 00:07:12.351 14:49:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.351 14:49:28 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:12.351 14:49:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:12.351 14:49:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.351 14:49:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.351 ************************************ 00:07:12.351 START TEST accel_copy_crc32c_C2 00:07:12.351 ************************************ 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:12.351 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:12.351 [2024-07-15 14:49:28.402268] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:12.351 [2024-07-15 14:49:28.402351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637271 ] 00:07:12.612 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.612 [2024-07-15 14:49:28.470850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.612 [2024-07-15 14:49:28.537386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.612 14:49:28 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.612 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
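For the -C 2 variant of copy_crc32c the trace above sets a '4096 bytes' value followed by an '8192 bytes' one, consistent with a two-element source vector feeding a destination of twice the transfer size. Sketch under the usual assumptions:

    # copy_crc32c over a 2-element io vector (-C 2)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2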
00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.613 14:49:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.000 14:49:29 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.000 00:07:14.000 real 0m1.292s 00:07:14.000 user 0m1.195s 00:07:14.000 sys 0m0.110s 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.000 14:49:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:14.000 ************************************ 00:07:14.000 END TEST accel_copy_crc32c_C2 00:07:14.000 ************************************ 00:07:14.000 14:49:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.000 14:49:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:14.000 14:49:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:14.000 14:49:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.000 14:49:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.000 ************************************ 00:07:14.000 START TEST accel_dualcast 00:07:14.000 ************************************ 00:07:14.000 14:49:29 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:14.000 [2024-07-15 14:49:29.768908] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
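Editor's note: the test that just finished drives the accel framework's copy_crc32c operation through the software module, that is, copy a source buffer and compute a CRC-32C (Castagnoli) over the copied bytes in one operation; the 4096-byte and 8192-byte values above suggest two chained 4096-byte source segments, though the chaining flag itself is not visible in this slice of the log. Below is a minimal bit-at-a-time sketch of the semantics; the function names are illustrative, and the real software module uses an optimized CRC, not this loop.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Reflected CRC-32C (Castagnoli, poly 0x82F63B78), bit at a time:
     * slow but obviously correct, good enough to show what is computed. */
    static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;

        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
        }
        return ~crc;
    }

    /* copy_crc32c: copy src into dst, return the CRC-32C of the copied
     * bytes; the log's "val=0" is plausibly the CRC seed. */
    static uint32_t copy_crc32c(void *dst, const void *src, size_t len,
                                uint32_t seed)
    {
        memcpy(dst, src, len);
        return crc32c(seed, dst, len);
    }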
00:07:14.000 14:49:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:14.000 ************************************
00:07:14.000 START TEST accel_dualcast
00:07:14.000 ************************************
00:07:14.000 14:49:29 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:07:14.000 [2024-07-15 14:49:29.768908] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:14.000 [2024-07-15 14:49:29.769003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637626 ]
00:07:14.000 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.000 [2024-07-15 14:49:29.838088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.000 [2024-07-15 14:49:29.906340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.000 14:49:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:07:14.001 14:49:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:07:15.386 14:49:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:15.386 14:49:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:15.386 14:49:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:15.386 
00:07:15.386 real 0m1.297s
00:07:15.386 user 0m1.201s
00:07:15.386 sys 0m0.107s
00:07:15.386 ************************************
00:07:15.386 END TEST accel_dualcast
00:07:15.386 ************************************
00:07:15.386 14:49:31 accel -- common/autotest_common.sh@1142 -- # return 0
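Editor's note: the dualcast operation verified above writes one source buffer to two destinations. A minimal sketch follows, assuming plain memcpy semantics; hardware engines (for example Intel DSA) can expose the same operation as a single descriptor, but that path is not exercised in this log, which runs the software module.

    #include <stddef.h>
    #include <string.h>

    /* dualcast: write the same source to two destinations. The run above
     * used 4096-byte buffers and a queue depth of 32. */
    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }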
00:07:15.386 14:49:31 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:15.386 ************************************
00:07:15.386 START TEST accel_compare
00:07:15.386 ************************************
00:07:15.386 14:49:31 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:07:15.386 14:49:31 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:15.386 14:49:31 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:15.386 14:49:31 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:07:15.386 [2024-07-15 14:49:31.140985] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:15.386 [2024-07-15 14:49:31.141050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637975 ]
00:07:15.386 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.386 [2024-07-15 14:49:31.208366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.386 [2024-07-15 14:49:31.273544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.386 14:49:31 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:07:15.387 14:49:31 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:07:16.775 14:49:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:16.775 14:49:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:16.775 14:49:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:16.775 
00:07:16.775 real 0m1.290s
00:07:16.775 user 0m1.201s
00:07:16.775 sys 0m0.101s
00:07:16.775 ************************************
00:07:16.775 END TEST accel_compare
00:07:16.775 ************************************
00:07:16.775 14:49:32 accel -- common/autotest_common.sh@1142 -- # return 0
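Editor's note: the compare operation just verified is a memcmp-style equality check between two buffers. A self-contained sketch with the 4096-byte size used above; the names are illustrative and only show the semantics the software module implements.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t len = 4096;   /* block size from the run above */
        unsigned char *a = malloc(len);
        unsigned char *b = malloc(len);

        memset(a, 0x5a, len);
        memcpy(b, a, len);
        /* compare: 0 means the buffers match, exactly as memcmp reports */
        printf("compare: %s\n", memcmp(a, b, len) == 0 ? "match" : "mismatch");
        free(a);
        free(b);
        return 0;
    }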
00:07:16.775 14:49:32 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:16.775 ************************************
00:07:16.775 START TEST accel_xor
00:07:16.775 ************************************
00:07:16.775 14:49:32 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:16.775 [2024-07-15 14:49:32.505597] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:16.775 [2024-07-15 14:49:32.505693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638162 ]
00:07:16.775 EAL: No free 2048 kB hugepages reported on node 1
00:07:16.775 [2024-07-15 14:49:32.577354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.775 [2024-07-15 14:49:32.648287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:16.775 14:49:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:17.976 14:49:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:17.976 14:49:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:17.976 14:49:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:17.976 
00:07:17.976 real 0m1.301s
00:07:17.976 user 0m1.202s
00:07:17.976 sys 0m0.111s
00:07:17.976 ************************************
00:07:17.976 END TEST accel_xor
00:07:17.976 ************************************
00:07:17.976 14:49:33 accel -- common/autotest_common.sh@1142 -- # return 0
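Editor's note: the xor operation computes the byte-wise parity of N equal-length sources into one destination. The run above used the default two sources (the "val=2" entry) and the next run passes "-x 3" for three. A sketch of the general N-source form, with illustrative names:

    #include <stddef.h>
    #include <stdint.h>

    /* xor: dst[i] = srcs[0][i] ^ srcs[1][i] ^ ... ^ srcs[nsrcs-1][i].
     * nsrcs = 2 in the run above; nsrcs = 3 in the "-x 3" run below. */
    static void xor_buffers(uint8_t *dst, uint8_t *const *srcs, int nsrcs,
                            size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = srcs[0][i];

            for (int s = 1; s < nsrcs; s++)
                v ^= srcs[s][i];
            dst[i] = v;
        }
    }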
00:07:17.976 14:49:33 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:17.976 ************************************
00:07:17.976 START TEST accel_xor
00:07:17.976 ************************************
00:07:17.976 14:49:33 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:07:17.976 14:49:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:17.976 14:49:33 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:17.976 14:49:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:17.976 [2024-07-15 14:49:33.882698] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:17.976 [2024-07-15 14:49:33.882795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638377 ]
00:07:17.976 EAL: No free 2048 kB hugepages reported on node 1
00:07:17.976 [2024-07-15 14:49:33.955996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.237 [2024-07-15 14:49:34.022226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:18.237 14:49:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:19.177 14:49:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:19.177 14:49:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:19.177 14:49:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:19.177 
00:07:19.177 real 0m1.298s
00:07:19.177 user 0m1.198s
00:07:19.177 sys 0m0.112s
00:07:19.177 ************************************
00:07:19.177 END TEST accel_xor
00:07:19.177 ************************************
00:07:19.177 14:49:35 accel -- common/autotest_common.sh@1142 -- # return 0
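Editor's note: one property worth recording for the three-source run just completed: the xor output is a parity block, so xoring it with all but one source reconstructs the remaining one (the RAID-5 use case). A small self-checking sketch, purely illustrative and not taken from the test itself:

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t len = 4096;   /* block size from the run above */
        uint8_t *a = malloc(len), *b = malloc(len);
        uint8_t *c = malloc(len), *p = malloc(len);

        for (size_t i = 0; i < len; i++) {
            a[i] = (uint8_t)rand();
            b[i] = (uint8_t)rand();
            c[i] = (uint8_t)rand();
            p[i] = a[i] ^ b[i] ^ c[i];   /* 3-source xor, as in "-x 3" */
        }
        /* parity plus any two sources recovers the third */
        for (size_t i = 0; i < len; i++)
            assert((p[i] ^ a[i] ^ b[i]) == c[i]);
        free(a); free(b); free(c); free(p);
        return 0;
    }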
00:07:19.177 14:49:35 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:19.177 ************************************
00:07:19.177 START TEST accel_dif_verify
00:07:19.177 ************************************
00:07:19.437 14:49:35 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:07:19.437 14:49:35 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:19.437 14:49:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:19.437 14:49:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:19.437 [2024-07-15 14:49:35.254629] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:19.437 [2024-07-15 14:49:35.254700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638714 ]
00:07:19.437 EAL: No free 2048 kB hugepages reported on node 1
00:07:19.437 [2024-07-15 14:49:35.322405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.437 [2024-07-15 14:49:35.389065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.437 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:07:19.437 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:07:19.437 14:49:35 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:19.437 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:19.437 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:19.437 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:07:19.438 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:07:19.438 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:07:19.438 14:49:35 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:07:19.438 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:19.438 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:19.438 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:07:19.438 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:07:19.438 14:49:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:07:20.914 14:49:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:20.914 14:49:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:20.914 14:49:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:20.914 
00:07:20.914 real 0m1.294s
00:07:20.914 user 0m1.198s
00:07:20.914 sys 0m0.108s
00:07:20.914 ************************************
00:07:20.914 END TEST accel_dif_verify
00:07:20.914 ************************************
00:07:20.914 14:49:36 accel -- common/autotest_common.sh@1142 -- # return 0
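Editor's note: dif_verify checks T10 DIF protection information. With the 512-byte block and 8-byte metadata sizes logged above, each 512-byte data block carries an 8-byte tuple: a 2-byte guard (CRC-16, polynomial 0x8BB7), a 2-byte application tag, and a 4-byte reference tag. A sketch of guard verification, assuming an interleaved data-plus-metadata layout; the log does not show which layout or tag checks the test actually enables.

    #include <stddef.h>
    #include <stdint.h>

    #define DATA_BLOCK 512   /* block size from the log */
    #define DIF_SIZE   8     /* metadata size from the log */

    /* CRC-16 T10-DIF: polynomial 0x8BB7, MSB first, initial value 0. */
    static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *p, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)p[i] << 8;
            for (int k = 0; k < 8; k++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Recompute each block's guard and compare it with the stored one.
     * Returns the index of the first corrupt block, or -1 if all pass. */
    static long dif_verify(const uint8_t *buf, size_t nblocks)
    {
        for (size_t b = 0; b < nblocks; b++) {
            const uint8_t *blk = buf + b * (DATA_BLOCK + DIF_SIZE);
            uint16_t guard = crc16_t10dif(0, blk, DATA_BLOCK);
            /* guard is stored big-endian in the first 2 tuple bytes */
            uint16_t stored = (uint16_t)((blk[DATA_BLOCK] << 8) |
                                         blk[DATA_BLOCK + 1]);

            if (guard != stored)
                return (long)b;
        }
        return -1;
    }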
00:07:20.914 14:49:36 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:20.914 ************************************
00:07:20.914 START TEST accel_dif_generate
00:07:20.914 ************************************
00:07:20.914 14:49:36 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:07:20.914 14:49:36 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:07:20.914 14:49:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:20.914 14:49:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:07:20.914 [2024-07-15 14:49:36.622971] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:07:20.914 [2024-07-15 14:49:36.623062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639066 ]
00:07:20.914 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.914 [2024-07-15 14:49:36.691608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.914 [2024-07-15 14:49:36.756080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.914 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:07:20.915 14:49:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:07:21.856 14:49:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:21.856 14:49:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:21.856 14:49:37 accel.accel_dif_generate --
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.856 00:07:21.856 real 0m1.293s 00:07:21.856 user 0m1.195s 00:07:21.856 sys 0m0.111s 00:07:21.856 14:49:37 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.856 14:49:37 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:21.856 ************************************ 00:07:21.856 END TEST accel_dif_generate 00:07:21.856 ************************************ 00:07:22.117 14:49:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.117 14:49:37 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:22.117 14:49:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:22.117 14:49:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.117 14:49:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.117 ************************************ 00:07:22.117 START TEST accel_dif_generate_copy 00:07:22.117 ************************************ 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:22.117 14:49:37 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:22.117 [2024-07-15 14:49:37.987967] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:22.117 [2024-07-15 14:49:37.988029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639423 ] 00:07:22.117 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.117 [2024-07-15 14:49:38.055347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.117 [2024-07-15 14:49:38.119919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.117 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 14:49:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.501 00:07:23.501 real 0m1.290s 00:07:23.501 user 0m1.193s 00:07:23.501 sys 0m0.108s 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.501 14:49:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.501 ************************************ 00:07:23.501 END TEST accel_dif_generate_copy 00:07:23.501 ************************************ 00:07:23.501 14:49:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.501 14:49:39 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:23.501 14:49:39 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.501 14:49:39 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:23.501 14:49:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.501 14:49:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.501 ************************************ 00:07:23.501 START TEST accel_comp 00:07:23.501 ************************************ 00:07:23.501 14:49:39 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.501 14:49:39 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:23.501 [2024-07-15 14:49:39.353824] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:23.501 [2024-07-15 14:49:39.353918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639676 ] 00:07:23.501 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.501 [2024-07-15 14:49:39.424834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.501 [2024-07-15 14:49:39.496643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.501 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.502 14:49:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.883 14:49:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.884 14:49:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.884 14:49:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:24.884 14:49:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.884 00:07:24.884 real 0m1.305s 00:07:24.884 user 0m1.208s 00:07:24.884 sys 0m0.110s 00:07:24.884 14:49:40 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.884 14:49:40 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:24.884 ************************************ 00:07:24.884 END TEST accel_comp 00:07:24.884 ************************************ 00:07:24.884 14:49:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.884 14:49:40 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.884 14:49:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:24.884 14:49:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.884 14:49:40 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.884 ************************************ 00:07:24.884 START TEST accel_decomp 00:07:24.884 ************************************ 00:07:24.884 14:49:40 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:24.884 [2024-07-15 14:49:40.733352] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:24.884 [2024-07-15 14:49:40.733427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639869 ] 00:07:24.884 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.884 [2024-07-15 14:49:40.802907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.884 [2024-07-15 14:49:40.873187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:24.884 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.885 14:49:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.267 14:49:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.267 14:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.267 14:49:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.267 14:49:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:42 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.267 14:49:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.267 00:07:26.267 real 0m1.300s 00:07:26.267 user 0m1.200s 00:07:26.267 sys 0m0.113s 00:07:26.267 14:49:42 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.267 14:49:42 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:26.267 ************************************ 00:07:26.267 END TEST accel_decomp 00:07:26.267 ************************************ 00:07:26.267 14:49:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.267 14:49:42 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.267 14:49:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:26.267 14:49:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.267 14:49:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.267 ************************************ 00:07:26.267 START TEST accel_decomp_full 00:07:26.267 ************************************ 00:07:26.267 14:49:42 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.267 14:49:42 accel.accel_decomp_full -- 
accel/accel.sh@12 -- # build_accel_config 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:26.267 [2024-07-15 14:49:42.107325] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:26.267 [2024-07-15 14:49:42.107408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640162 ] 00:07:26.267 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.267 [2024-07-15 14:49:42.175548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.267 [2024-07-15 14:49:42.242020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.267 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@23 
-- # accel_opc=decompress 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.268 14:49:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.653 14:49:43 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.653 00:07:27.653 real 0m1.305s 00:07:27.653 user 0m1.216s 00:07:27.653 sys 0m0.102s 00:07:27.653 14:49:43 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.653 14:49:43 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:27.653 ************************************ 00:07:27.653 END TEST accel_decomp_full 00:07:27.653 ************************************ 00:07:27.653 14:49:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.653 14:49:43 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.653 14:49:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:27.653 14:49:43 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.653 14:49:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.653 ************************************ 00:07:27.653 START TEST accel_decomp_mcore 00:07:27.653 ************************************ 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:27.653 [2024-07-15 14:49:43.486704] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:27.653 [2024-07-15 14:49:43.486799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640509 ] 00:07:27.653 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.653 [2024-07-15 14:49:43.555642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.653 [2024-07-15 14:49:43.624836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.653 [2024-07-15 14:49:43.624953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.653 [2024-07-15 14:49:43.625107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.653 [2024-07-15 14:49:43.625108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:27.653 14:49:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.042 00:07:29.042 real 0m1.308s 00:07:29.042 user 0m4.435s 00:07:29.042 sys 0m0.120s 00:07:29.042 14:49:44 
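Most of the volume in this trace is accel.sh stepping through its option records: the repeated IFS=: / read -r var val / case "$var" lines. Pieced together from the accel.sh line numbers visible in the trace (@19 for the read, @21 for the case, @22/@23 for the assignments), the loop plausibly has the shape below; the case patterns and the input source are assumptions, only accel_opc and accel_module appear verbatim:

# Shape of the accel.sh loop implied by the trace, not a verbatim copy;
# it reads from whatever the harness pipes in (input elided here).
while IFS=: read -r var val; do            # accel.sh@19 in the trace
    case "$var" in                         # accel.sh@21
        *opc*)    accel_opc=$val ;;        # accel.sh@23: accel_opc=decompress
        *module*) accel_module=$val ;;     # accel.sh@22: accel_module=software
        *)        : ;;                     # mask, sizes, duration, Yes/No flags
    esac
done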
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.042 14:49:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:29.042 ************************************ 00:07:29.042 END TEST accel_decomp_mcore 00:07:29.042 ************************************ 00:07:29.042 14:49:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.042 14:49:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.042 14:49:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:29.042 14:49:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.042 14:49:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.042 ************************************ 00:07:29.042 START TEST accel_decomp_full_mcore 00:07:29.042 ************************************ 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:29.042 14:49:44 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:29.042 [2024-07-15 14:49:44.867837] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
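accel_decomp_full_mcore, just launched above, differs from the previous case only by -o 0. Judging from the buffer sizes echoed in the two traces ('4096 bytes' for the run above, '111250 bytes' below), a transfer size of 0 appears to make accel_perf process the whole input file per operation instead of the 4 KiB default; that reading is an inference from this log:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# Same decompress run as before, with -o 0 => whole-file (111250-byte) ops.
"$SPDK_DIR"/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
    -l "$SPDK_DIR"/test/accel/bib -y -o 0 -m 0xf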
00:07:29.042 [2024-07-15 14:49:44.867942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640867 ] 00:07:29.042 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.042 [2024-07-15 14:49:44.943657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.042 [2024-07-15 14:49:45.016921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.042 [2024-07-15 14:49:45.017038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.042 [2024-07-15 14:49:45.017193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.042 [2024-07-15 14:49:45.017193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.042 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.042 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.043 14:49:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.429 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.430 00:07:30.430 real 0m1.333s 00:07:30.430 user 0m4.500s 00:07:30.430 sys 0m0.119s 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.430 14:49:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:30.430 ************************************ 00:07:30.430 END TEST accel_decomp_full_mcore 00:07:30.430 ************************************ 00:07:30.430 14:49:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.430 14:49:46 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.430 14:49:46 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:30.430 14:49:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.430 14:49:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.430 ************************************ 00:07:30.430 START TEST accel_decomp_mthread 00:07:30.430 ************************************ 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:30.430 [2024-07-15 14:49:46.272802] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
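The mthread variant launched above swaps the 0xf core mask for -T 2. The EAL parameters below show -c 0x1 and a single reactor, so concurrency now comes from two worker threads on one core rather than four reactors; compare the user times, roughly 4.4-4.5 s spread over four cores in the mcore runs above versus about 1.2 s in the single-core runs that follow. A sketch, with the -T reading inferred from the 'val=2' line in the trace:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# -T 2: two worker threads on the single reactor (core mask defaults to 0x1).
"$SPDK_DIR"/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
    -l "$SPDK_DIR"/test/accel/bib -y -T 2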
00:07:30.430 [2024-07-15 14:49:46.272868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641217 ] 00:07:30.430 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.430 [2024-07-15 14:49:46.341867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.430 [2024-07-15 14:49:46.412513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.430 14:49:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.816 14:49:47 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.817 00:07:31.817 real 0m1.305s 00:07:31.817 user 0m1.211s 00:07:31.817 sys 0m0.106s 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.817 14:49:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:31.817 ************************************ 00:07:31.817 END TEST accel_decomp_mthread 00:07:31.817 ************************************ 00:07:31.817 14:49:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.817 14:49:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.817 14:49:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:31.817 14:49:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.817 14:49:47 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.817 ************************************ 00:07:31.817 START TEST accel_decomp_full_mthread 00:07:31.817 ************************************ 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:31.817 [2024-07-15 14:49:47.652552] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
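accel_decomp_full_mthread, launched above, completes the decompress matrix by combining both switches. Since the four cases differ only in those two flags, they condense into the sketch below; perf_decomp is a hypothetical wrapper, not a helper from the harness, and the per-case glosses repeat the inferences noted earlier:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
perf_decomp() {   # common part of all four logged invocations
    "$SPDK_DIR"/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK_DIR"/test/accel/bib -y "$@"
}
perf_decomp -m 0xf         # accel_decomp_mcore:        4 KiB ops, 4 reactors
perf_decomp -o 0 -m 0xf    # accel_decomp_full_mcore:   whole file, 4 reactors
perf_decomp -T 2           # accel_decomp_mthread:      4 KiB ops, 2 threads
perf_decomp -o 0 -T 2      # accel_decomp_full_mthread: whole file, 2 threads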
00:07:31.817 [2024-07-15 14:49:47.652648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641423 ] 00:07:31.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.817 [2024-07-15 14:49:47.722677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.817 [2024-07-15 14:49:47.792343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.817 14:49:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.817 14:49:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.199 00:07:33.199 real 0m1.330s 00:07:33.199 user 0m1.236s 00:07:33.199 sys 0m0.108s 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.199 14:49:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:33.199 ************************************ 00:07:33.199 END 
TEST accel_decomp_full_mthread 00:07:33.199 ************************************ 00:07:33.199 14:49:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.199 14:49:48 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:33.199 14:49:48 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:33.199 14:49:48 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:33.199 14:49:48 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:33.199 14:49:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.199 14:49:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.199 14:49:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.199 14:49:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.199 14:49:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.199 14:49:48 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.199 14:49:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.199 14:49:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:33.199 14:49:48 accel -- accel/accel.sh@41 -- # jq -r . 00:07:33.199 ************************************ 00:07:33.199 START TEST accel_dif_functional_tests 00:07:33.199 ************************************ 00:07:33.199 14:49:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:33.199 [2024-07-15 14:49:49.072576] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:33.199 [2024-07-15 14:49:49.072638] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641634 ] 00:07:33.199 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.199 [2024-07-15 14:49:49.142999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.199 [2024-07-15 14:49:49.215076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.199 [2024-07-15 14:49:49.215209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.199 [2024-07-15 14:49:49.215212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.461 00:07:33.461 00:07:33.461 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.461 http://cunit.sourceforge.net/ 00:07:33.461 00:07:33.461 00:07:33.461 Suite: accel_dif 00:07:33.461 Test: verify: DIF generated, GUARD check ...passed 00:07:33.461 Test: verify: DIF generated, APPTAG check ...passed 00:07:33.461 Test: verify: DIF generated, REFTAG check ...passed 00:07:33.461 Test: verify: DIF not generated, GUARD check ...[2024-07-15 14:49:49.270695] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:33.461 passed 00:07:33.461 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 14:49:49.270738] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:33.461 passed 00:07:33.461 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 14:49:49.270760] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:33.461 passed 00:07:33.461 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:33.461 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
14:49:49.270813] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:33.461 passed 00:07:33.461 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:33.461 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:33.461 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:33.461 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 14:49:49.270928] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:33.461 passed 00:07:33.461 Test: verify copy: DIF generated, GUARD check ...passed 00:07:33.461 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:33.461 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:33.461 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 14:49:49.271046] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:33.461 passed 00:07:33.461 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 14:49:49.271068] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:33.461 passed 00:07:33.461 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 14:49:49.271089] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:33.461 passed 00:07:33.461 Test: generate copy: DIF generated, GUARD check ...passed 00:07:33.461 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:33.461 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:33.461 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:33.461 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:33.461 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:33.461 Test: generate copy: iovecs-len validate ...[2024-07-15 14:49:49.271280] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
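The dif.c errors above are the suite's expected negative cases: each of the three DIF protection-information fields is corrupted in turn — Guard (a checksum over the block: 'Expected=5a5a, Actual=7867'), Application Tag, and Reference Tag — and verification must catch it, while the iovecs-len case rejects bounce buffers that are misaligned with the block size. The functional test binary takes the same JSON accel config on a file descriptor; a sketch mirroring the command the harness logged at the start of this suite:

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# fd 62 is opened by the harness; standalone, a plain config file path should
# also work (an assumption - the log only shows the /dev/fd/62 form).
# 26 CUnit tests / 115 asserts, per the run summary that follows.
"$SPDK_DIR"/test/accel/dif/dif -c /dev/fd/62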
00:07:33.461 passed 00:07:33.461 Test: generate copy: buffer alignment validate ...passed 00:07:33.461 00:07:33.461 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.461 suites 1 1 n/a 0 0 00:07:33.461 tests 26 26 26 0 0 00:07:33.461 asserts 115 115 115 0 n/a 00:07:33.461 00:07:33.461 Elapsed time = 0.002 seconds 00:07:33.461 00:07:33.461 real 0m0.363s 00:07:33.461 user 0m0.486s 00:07:33.461 sys 0m0.142s 00:07:33.461 14:49:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.461 14:49:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:33.461 ************************************ 00:07:33.461 END TEST accel_dif_functional_tests 00:07:33.461 ************************************ 00:07:33.461 14:49:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.461 00:07:33.461 real 0m30.250s 00:07:33.461 user 0m33.712s 00:07:33.461 sys 0m4.295s 00:07:33.461 14:49:49 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.461 14:49:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.461 ************************************ 00:07:33.461 END TEST accel 00:07:33.461 ************************************ 00:07:33.461 14:49:49 -- common/autotest_common.sh@1142 -- # return 0 00:07:33.461 14:49:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:33.461 14:49:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.461 14:49:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.461 14:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:33.461 ************************************ 00:07:33.461 START TEST accel_rpc 00:07:33.461 ************************************ 00:07:33.461 14:49:49 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:33.721 * Looking for test storage... 00:07:33.721 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:33.721 14:49:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:33.721 14:49:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1641988 00:07:33.721 14:49:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1641988 00:07:33.721 14:49:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:33.721 14:49:49 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1641988 ']' 00:07:33.721 14:49:49 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.721 14:49:49 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.721 14:49:49 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.721 14:49:49 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.721 14:49:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.721 [2024-07-15 14:49:49.668180] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:07:33.721 [2024-07-15 14:49:49.668265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641988 ] 00:07:33.721 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.721 [2024-07-15 14:49:49.738828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.981 [2024-07-15 14:49:49.813145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.551 14:49:50 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.551 14:49:50 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:34.551 14:49:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:34.551 14:49:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:34.551 14:49:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:34.551 14:49:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:34.551 14:49:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:34.551 14:49:50 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.551 14:49:50 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.551 14:49:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.551 ************************************ 00:07:34.551 START TEST accel_assign_opcode 00:07:34.551 ************************************ 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.551 [2024-07-15 14:49:50.455041] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.551 [2024-07-15 14:49:50.467068] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.551 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.812 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.812 14:49:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:34.812 14:49:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:34.812 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
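What accel_assign_opcode establishes: while spdk_tgt is parked in --wait-for-rpc, an opcode-to-module assignment may be set — even to a nonexistent module, as the 'incorrect' notice above shows — and then overridden, and after framework_start_init the query must report the last valid assignment. The rpc_cmd calls in the trace wrap scripts/rpc.py, so the equivalent by hand looks roughly like this (backgrounding details and the readiness wait elided):

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK_DIR"/scripts/rpc.py
"$SPDK_DIR"/build/bin/spdk_tgt --wait-for-rpc &   # defer subsystem init
$RPC accel_assign_opc -o copy -m software         # pin 'copy' to software
$RPC framework_start_init                         # finish initialization
$RPC accel_get_opc_assignments | jq -r .copy      # expect: software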
00:07:34.812 14:49:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:34.812 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.812 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.812 software 00:07:34.812 00:07:34.812 real 0m0.216s 00:07:34.812 user 0m0.052s 00:07:34.812 sys 0m0.009s 00:07:34.812 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.812 14:49:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.812 ************************************ 00:07:34.812 END TEST accel_assign_opcode 00:07:34.812 ************************************ 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:34.812 14:49:50 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1641988 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1641988 ']' 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1641988 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1641988 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1641988' 00:07:34.812 killing process with pid 1641988 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@967 -- # kill 1641988 00:07:34.812 14:49:50 accel_rpc -- common/autotest_common.sh@972 -- # wait 1641988 00:07:35.073 00:07:35.073 real 0m1.467s 00:07:35.073 user 0m1.517s 00:07:35.073 sys 0m0.426s 00:07:35.073 14:49:50 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.073 14:49:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.073 ************************************ 00:07:35.073 END TEST accel_rpc 00:07:35.073 ************************************ 00:07:35.073 14:49:51 -- common/autotest_common.sh@1142 -- # return 0 00:07:35.073 14:49:51 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:35.073 14:49:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.073 14:49:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.073 14:49:51 -- common/autotest_common.sh@10 -- # set +x 00:07:35.073 ************************************ 00:07:35.073 START TEST app_cmdline 00:07:35.073 ************************************ 00:07:35.073 14:49:51 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:35.073 * Looking for test storage... 
00:07:35.332 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:35.333 14:49:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:35.333 14:49:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1642335 00:07:35.333 14:49:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1642335 00:07:35.333 14:49:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:35.333 14:49:51 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1642335 ']' 00:07:35.333 14:49:51 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.333 14:49:51 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.333 14:49:51 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.333 14:49:51 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.333 14:49:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.333 [2024-07-15 14:49:51.197906] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:35.333 [2024-07-15 14:49:51.197973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642335 ] 00:07:35.333 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.333 [2024-07-15 14:49:51.264472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.333 [2024-07-15 14:49:51.329992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.903 14:49:51 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.903 14:49:51 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:35.903 14:49:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:36.164 { 00:07:36.164 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:07:36.164 "fields": { 00:07:36.164 "major": 24, 00:07:36.164 "minor": 9, 00:07:36.164 "patch": 0, 00:07:36.164 "suffix": "-pre", 00:07:36.164 "commit": "2728651ee" 00:07:36.164 } 00:07:36.164 } 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:36.164 14:49:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:36.164 14:49:52 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.425 request: 00:07:36.426 { 00:07:36.426 "method": "env_dpdk_get_mem_stats", 00:07:36.426 "req_id": 1 00:07:36.426 } 00:07:36.426 Got JSON-RPC error response 00:07:36.426 response: 00:07:36.426 { 00:07:36.426 "code": -32601, 00:07:36.426 "message": "Method not found" 00:07:36.426 } 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.426 14:49:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1642335 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1642335 ']' 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1642335 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1642335 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1642335' 00:07:36.426 killing process with pid 1642335 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@967 -- # kill 1642335 00:07:36.426 14:49:52 app_cmdline -- common/autotest_common.sh@972 -- # wait 1642335 00:07:36.685 00:07:36.685 real 0m1.533s 00:07:36.685 user 0m1.815s 00:07:36.685 sys 0m0.406s 00:07:36.685 14:49:52 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.685 14:49:52 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.685 ************************************ 00:07:36.685 END TEST app_cmdline 00:07:36.685 ************************************ 00:07:36.685 14:49:52 -- common/autotest_common.sh@1142 -- # return 0 00:07:36.685 14:49:52 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:36.685 14:49:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.685 14:49:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.685 14:49:52 -- common/autotest_common.sh@10 -- # set +x 00:07:36.685 ************************************ 00:07:36.685 START TEST version 00:07:36.685 ************************************ 00:07:36.685 14:49:52 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:36.685 * Looking for test storage... 00:07:36.685 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:36.946 14:49:52 version -- app/version.sh@17 -- # get_header_version major 00:07:36.946 14:49:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:36.946 14:49:52 version -- app/version.sh@14 -- # cut -f2 00:07:36.946 14:49:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.946 14:49:52 version -- app/version.sh@17 -- # major=24 00:07:36.946 14:49:52 version -- app/version.sh@18 -- # get_header_version minor 00:07:36.946 14:49:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.946 14:49:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:36.946 14:49:52 version -- app/version.sh@14 -- # cut -f2 00:07:36.946 14:49:52 version -- app/version.sh@18 -- # minor=9 00:07:36.946 14:49:52 version -- app/version.sh@19 -- # get_header_version patch 00:07:36.946 14:49:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:36.946 14:49:52 version -- app/version.sh@14 -- # cut -f2 00:07:36.946 14:49:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.946 14:49:52 version -- app/version.sh@19 -- # patch=0 00:07:36.946 14:49:52 version -- app/version.sh@20 -- # get_header_version suffix 00:07:36.946 14:49:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:36.946 14:49:52 version -- app/version.sh@14 -- # cut -f2 00:07:36.946 14:49:52 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.946 14:49:52 version -- app/version.sh@20 -- # suffix=-pre 00:07:36.946 14:49:52 version -- app/version.sh@22 -- # version=24.9 00:07:36.946 14:49:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:36.946 14:49:52 version -- app/version.sh@28 -- # version=24.9rc0 00:07:36.946 14:49:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:36.946 14:49:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:36.946 14:49:52 version -- app/version.sh@30 -- # py_version=24.9rc0 
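The version check in this block is driven entirely by grepping include/spdk/version.h, as the trace shows. Pulled out as a standalone snippet (a sketch; the real get_header_version helper lives in test/app/version.sh and takes the lowercase component name, while this sketch is simply given the macro suffix directly):

    # Extract one SPDK_VERSION_* define and strip the surrounding quotes
    get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
        | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 24 in this run
    minor=$(get_header_version MINOR)     # 9
    patch=$(get_header_version PATCH)     # 0, so it is dropped from the version string
    suffix=$(get_header_version SUFFIX)   # -pre, rendered as rc0

The shell side thus arrives at 24.9rc0, and the comparison just below checks it against the 24.9rc0 reported by python3 -c 'import spdk; print(spdk.__version__)'.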
00:07:36.946 14:49:52 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:36.946 00:07:36.946 real 0m0.172s 00:07:36.946 user 0m0.092s 00:07:36.946 sys 0m0.117s 00:07:36.946 14:49:52 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.946 14:49:52 version -- common/autotest_common.sh@10 -- # set +x 00:07:36.946 ************************************ 00:07:36.946 END TEST version 00:07:36.946 ************************************ 00:07:36.946 14:49:52 -- common/autotest_common.sh@1142 -- # return 0 00:07:36.946 14:49:52 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:36.946 14:49:52 -- spdk/autotest.sh@198 -- # uname -s 00:07:36.946 14:49:52 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:36.946 14:49:52 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:36.946 14:49:52 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:36.946 14:49:52 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:36.946 14:49:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:36.946 14:49:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:36.946 14:49:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.946 14:49:52 -- common/autotest_common.sh@10 -- # set +x 00:07:36.946 14:49:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:36.946 14:49:52 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:36.946 14:49:52 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:36.946 14:49:52 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:36.946 14:49:52 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:07:36.946 14:49:52 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:36.946 14:49:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:36.946 14:49:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.946 14:49:52 -- common/autotest_common.sh@10 -- # set +x 00:07:36.946 ************************************ 00:07:36.946 START TEST nvmf_rdma 00:07:36.946 ************************************ 00:07:36.946 14:49:52 nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:37.207 * Looking for test storage... 00:07:37.207 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.207 14:49:53 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:37.207 14:49:53 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.207 14:49:53 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.207 14:49:53 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.207 14:49:53 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.207 14:49:53 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.207 14:49:53 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.207 14:49:53 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:07:37.208 14:49:53 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:37.208 14:49:53 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.208 14:49:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:37.208 14:49:53 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:37.208 14:49:53 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.208 14:49:53 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.208 14:49:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:37.208 ************************************ 00:07:37.208 START TEST nvmf_example 00:07:37.208 ************************************ 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:37.208 * Looking for test storage... 
00:07:37.208 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:37.208 14:49:53 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.208 14:49:53 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:45.357 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:07:45.358 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:07:45.358 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.358 14:50:00 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:07:45.358 Found net devices under 0000:98:00.0: mlx_0_0 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:07:45.358 Found net devices under 0000:98:00.1: mlx_0_1 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:45.358 14:50:00 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:45.358 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:45.358 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:45.358 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:07:45.358 altname enp152s0f0np0 00:07:45.358 altname ens817f0np0 00:07:45.358 inet 192.168.100.8/24 scope global mlx_0_0 00:07:45.358 valid_lft forever preferred_lft forever 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:45.359 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:45.359 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:07:45.359 altname enp152s0f1np1 00:07:45.359 altname ens817f1np1 00:07:45.359 inet 192.168.100.9/24 scope global mlx_0_1 00:07:45.359 valid_lft forever preferred_lft forever 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:45.359 14:50:01 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:45.359 192.168.100.9' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:45.359 192.168.100.9' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:45.359 
14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:45.359 192.168.100.9' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1647000 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1647000 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1647000 ']' 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
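At this point nvmftestinit has found the two mlx5 ports (mlx_0_0 and mlx_0_1), loaded the ib_*/rdma_* modules, and collected one IPv4 address per RDMA interface. The two target IPs used for the rest of the run are simply the first and second entries of that list; reduced to a standalone snippet, the head/tail pipeline from the trace is (a sketch mirroring nvmf/common.sh, with the list written out literally):

    # RDMA_IP_LIST holds one address per line, in interface order
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

With the addresses in hand, the example target (build/examples/nvmf) is started on core mask 0xF and the script waits for its RPC socket before configuring it.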
00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.359 14:50:01 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:46.298 14:50:02 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
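The configuration just traced is the whole of the example test's target setup: create an RDMA transport, back a namespace with a malloc bdev, expose it as a subsystem listening on the first RDMA address, then drive it with spdk_nvme_perf. Collapsed into direct rpc.py calls against the running example target (a sketch of the same sequence, paths shortened; the harness issues these through rpc_cmd):

    # Transport and backing store
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512        # 64 MB bdev, 512 B blocks -> Malloc0
    # Subsystem cnode1 with the bdev as a namespace, listening on 192.168.100.8:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Initiator side: 10 s of 4 KiB random mixed I/O (-M 30) at queue depth 64
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf results and the target teardown follow below.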
00:07:46.558 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.786 Initializing NVMe Controllers 00:07:58.786 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:58.786 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:58.786 Initialization complete. Launching workers. 00:07:58.786 ======================================================== 00:07:58.786 Latency(us) 00:07:58.786 Device Information : IOPS MiB/s Average min max 00:07:58.786 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 23989.07 93.71 2667.66 680.92 14062.07 00:07:58.786 ======================================================== 00:07:58.786 Total : 23989.07 93.71 2667.66 680.92 14062.07 00:07:58.786 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:58.786 rmmod nvme_rdma 00:07:58.786 rmmod nvme_fabrics 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1647000 ']' 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1647000 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1647000 ']' 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1647000 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1647000 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1647000' 00:07:58.786 killing process with pid 1647000 00:07:58.786 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # kill 1647000 00:07:58.787 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@972 -- # wait 1647000 00:07:58.787 nvmf threads initialize successfully 00:07:58.787 bdev subsystem init successfully 00:07:58.787 created a nvmf target service 00:07:58.787 create targets's poll groups done 00:07:58.787 all subsystems of target started 00:07:58.787 nvmf target is running 00:07:58.787 all subsystems of target stopped 00:07:58.787 destroy targets's poll groups done 
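The summary table above is internally consistent, which is a quick sanity check for a run like this: 23989.07 IOPS of 4096-byte I/O is 23989.07 * 4096 / 2^20, or about 93.71 MiB/s, matching the MiB/s column, and with a queue depth of 64 Little's law gives an expected average latency of 64 / 23989.07 s, roughly 2668 us, in line with the reported 2667.66 us.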
00:07:58.787 destroyed the nvmf target service 00:07:58.787 bdev subsystem finish successfully 00:07:58.787 nvmf threads destroy successfully 00:07:58.787 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.787 14:50:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:58.787 14:50:13 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:58.787 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.787 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:58.787 00:07:58.787 real 0m20.845s 00:07:58.787 user 0m52.504s 00:07:58.787 sys 0m6.269s 00:07:58.787 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.787 14:50:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:58.787 ************************************ 00:07:58.787 END TEST nvmf_example 00:07:58.787 ************************************ 00:07:58.787 14:50:13 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:07:58.787 14:50:14 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:58.787 14:50:14 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.787 14:50:14 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.787 14:50:14 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:58.787 ************************************ 00:07:58.787 START TEST nvmf_filesystem 00:07:58.787 ************************************ 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:58.787 * Looking for test storage... 
00:07:58.787 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:58.787 14:50:14 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- 
# CONFIG_HAVE_LIBBSD=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:58.787 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:58.788 
14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:58.788 #define SPDK_CONFIG_H 00:07:58.788 #define SPDK_CONFIG_APPS 1 00:07:58.788 #define SPDK_CONFIG_ARCH native 00:07:58.788 #undef SPDK_CONFIG_ASAN 00:07:58.788 #undef SPDK_CONFIG_AVAHI 00:07:58.788 #undef SPDK_CONFIG_CET 00:07:58.788 #define SPDK_CONFIG_COVERAGE 1 00:07:58.788 #define SPDK_CONFIG_CROSS_PREFIX 00:07:58.788 #undef SPDK_CONFIG_CRYPTO 00:07:58.788 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:58.788 #undef SPDK_CONFIG_CUSTOMOCF 00:07:58.788 #undef SPDK_CONFIG_DAOS 00:07:58.788 #define SPDK_CONFIG_DAOS_DIR 00:07:58.788 #define SPDK_CONFIG_DEBUG 1 00:07:58.788 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:58.788 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:58.788 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:58.788 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:58.788 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:58.788 #undef SPDK_CONFIG_DPDK_UADK 00:07:58.788 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:58.788 #define SPDK_CONFIG_EXAMPLES 1 00:07:58.788 #undef SPDK_CONFIG_FC 00:07:58.788 #define SPDK_CONFIG_FC_PATH 00:07:58.788 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:58.788 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:58.788 #undef SPDK_CONFIG_FUSE 00:07:58.788 #undef SPDK_CONFIG_FUZZER 00:07:58.788 #define SPDK_CONFIG_FUZZER_LIB 00:07:58.788 #undef SPDK_CONFIG_GOLANG 00:07:58.788 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:58.788 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:58.788 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:58.788 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:58.788 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:58.788 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:58.788 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:58.788 #define SPDK_CONFIG_IDXD 1 00:07:58.788 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:58.788 #undef SPDK_CONFIG_IPSEC_MB 00:07:58.788 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:58.788 #define SPDK_CONFIG_ISAL 1 00:07:58.788 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:58.788 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:58.788 #define SPDK_CONFIG_LIBDIR 00:07:58.788 #undef SPDK_CONFIG_LTO 00:07:58.788 #define SPDK_CONFIG_MAX_LCORES 128 00:07:58.788 #define SPDK_CONFIG_NVME_CUSE 1 00:07:58.788 #undef SPDK_CONFIG_OCF 00:07:58.788 #define 
SPDK_CONFIG_OCF_PATH 00:07:58.788 #define SPDK_CONFIG_OPENSSL_PATH 00:07:58.788 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:58.788 #define SPDK_CONFIG_PGO_DIR 00:07:58.788 #undef SPDK_CONFIG_PGO_USE 00:07:58.788 #define SPDK_CONFIG_PREFIX /usr/local 00:07:58.788 #undef SPDK_CONFIG_RAID5F 00:07:58.788 #undef SPDK_CONFIG_RBD 00:07:58.788 #define SPDK_CONFIG_RDMA 1 00:07:58.788 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:58.788 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:58.788 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:58.788 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:58.788 #define SPDK_CONFIG_SHARED 1 00:07:58.788 #undef SPDK_CONFIG_SMA 00:07:58.788 #define SPDK_CONFIG_TESTS 1 00:07:58.788 #undef SPDK_CONFIG_TSAN 00:07:58.788 #define SPDK_CONFIG_UBLK 1 00:07:58.788 #define SPDK_CONFIG_UBSAN 1 00:07:58.788 #undef SPDK_CONFIG_UNIT_TESTS 00:07:58.788 #undef SPDK_CONFIG_URING 00:07:58.788 #define SPDK_CONFIG_URING_PATH 00:07:58.788 #undef SPDK_CONFIG_URING_ZNS 00:07:58.788 #undef SPDK_CONFIG_USDT 00:07:58.788 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:58.788 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:58.788 #undef SPDK_CONFIG_VFIO_USER 00:07:58.788 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:58.788 #define SPDK_CONFIG_VHOST 1 00:07:58.788 #define SPDK_CONFIG_VIRTIO 1 00:07:58.788 #undef SPDK_CONFIG_VTUNE 00:07:58.788 #define SPDK_CONFIG_VTUNE_DIR 00:07:58.788 #define SPDK_CONFIG_WERROR 1 00:07:58.788 #define SPDK_CONFIG_WPDK_DIR 00:07:58.788 #undef SPDK_CONFIG_XNVME 00:07:58.788 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:58.788 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN
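Every ': value' line followed by an 'export NAME' line in this stretch of trace is bash's default-then-export idiom: assign a default only when the job has not already set the flag, then export it for child scripts. A sketch of the pattern with three of the flags traced above (the ':=' defaults shown are inferred from the traced ': 0' expansions and are assumptions; the real file repeats this pair for every knob):

    # Sketch of the traced default-and-export idiom, not verbatim source.
    : "${SPDK_TEST_NVMF:=0}"              # traces as ': 1' because this job enables NVMf
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traces as ': rdma' in this run; default assumed
    export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_RUN_ASAN:=0}"               # traces as ': 0', i.e. left at its default
    export SPDK_RUN_ASAN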
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # :
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD
00:07:58.789 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # :
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem --
common/autotest_common.sh@158 -- # : 0 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:58.790 14:50:14 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1649640 ]] 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1649640 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.a5mww2 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.a5mww2/tests/target /tmp/spdk.a5mww2 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956157952 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4328271872 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122757386240 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6613594112 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864245248 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9953280 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:58.791 
14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=324608
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683814912
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1675264
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n'
00:07:58.791 * Looking for test storage...
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}"
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}'
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122757386240
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size ))
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size ))
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]]
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]]
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]]
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8828186624
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 ))
00:07:58.791 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
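The arithmetic above is the storage picker: take the candidate directory's mount point from df, use that mount's available bytes as target_space, and accept the candidate unless the projected usage would push the filesystem past 95%. A condensed bash sketch with this run's numbers (values copied from the traced assignments; the early exits stand in for the real code's fall-through to the next storage candidate):

    # Condensed sketch of the traced set_test_storage pass, not verbatim.
    requested_size=2214592512      # from the @358 trace: 2 GiB plus margin
    target=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
    mount=$(df "$target" | awk '$1 !~ /Filesystem/{print $6}')  # resolves to /
    target_space=122757386240      # avails[/] gathered from the df -T walk above
    (( target_space >= requested_size )) || exit 1   # real code: try next candidate
    new_size=$(( 6613594112 + requested_size ))      # uses[/] + request = 8828186624
    (( new_size * 100 / 129370980352 > 95 )) && exit 1  # sizes[/]; about 7% here, so accept
    export SPDK_TEST_STORAGE=$target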
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:07:58.792 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]]
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]]
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- #
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.792 14:50:14 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.925 14:50:22 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:06.925 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:06.925 14:50:22 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:06.925 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:06.925 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:06.926 Found net devices under 0000:98:00.0: mlx_0_0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:06.926 Found net devices under 0000:98:00.1: mlx_0_1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:08:06.926 14:50:22 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:06.926 
14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:06.926 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:06.926 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:06.926 altname enp152s0f0np0 00:08:06.926 altname ens817f0np0 00:08:06.926 inet 192.168.100.8/24 scope global mlx_0_0 00:08:06.926 valid_lft forever preferred_lft forever 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:06.926 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:06.926 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:06.926 altname enp152s0f1np1 00:08:06.926 altname ens817f1np1 00:08:06.926 inet 192.168.100.9/24 scope global mlx_0_1 00:08:06.926 valid_lft forever preferred_lft forever 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:06.926 14:50:22 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:06.926 192.168.100.9' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:06.926 192.168.100.9' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:06.926 192.168.100.9' 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:06.926 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.927 ************************************ 00:08:06.927 START TEST nvmf_filesystem_no_in_capsule 00:08:06.927 ************************************ 00:08:06.927 14:50:22 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1653959 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1653959 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1653959 ']' 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.927 14:50:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.927 [2024-07-15 14:50:22.568872] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:06.927 [2024-07-15 14:50:22.568920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.927 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.927 [2024-07-15 14:50:22.637132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.927 [2024-07-15 14:50:22.705501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.927 [2024-07-15 14:50:22.705538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.927 [2024-07-15 14:50:22.705546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.927 [2024-07-15 14:50:22.705552] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.927 [2024-07-15 14:50:22.705558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:06.927 [2024-07-15 14:50:22.705710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.927 [2024-07-15 14:50:22.705830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.927 [2024-07-15 14:50:22.705986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.927 [2024-07-15 14:50:22.705987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.517 [2024-07-15 14:50:23.392906] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:07.517 [2024-07-15 14:50:23.423916] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1475200/0x14796f0) succeed. 00:08:07.517 [2024-07-15 14:50:23.437223] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1476840/0x14bad80) succeed. 
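For reference, the target-side setup that the trace below drives through rpc_cmd reduces to a short RPC sequence. The sketch here assumes the scripts/rpc.py wrapper and its default /var/tmp/spdk.sock socket; the transport options, malloc bdev size, subsystem NQN, serial, listen address and port are the values recorded in this run:

  # Sketch of the traced rpc_cmd calls as direct rpc.py invocations (values from this log)
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then attaches with the nvme CLI exactly as traced further down: nvme connect -i 15 --hostnqn=... --hostid=... -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420.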
00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.517 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.778 Malloc1 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.778 [2024-07-15 14:50:23.676528] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.778 14:50:23 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.778 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:07.778 { 00:08:07.778 "name": "Malloc1", 00:08:07.778 "aliases": [ 00:08:07.778 "8eae4e84-8a3b-48ca-8ffa-9cb563c037c9" 00:08:07.778 ], 00:08:07.778 "product_name": "Malloc disk", 00:08:07.778 "block_size": 512, 00:08:07.778 "num_blocks": 1048576, 00:08:07.778 "uuid": "8eae4e84-8a3b-48ca-8ffa-9cb563c037c9", 00:08:07.778 "assigned_rate_limits": { 00:08:07.778 "rw_ios_per_sec": 0, 00:08:07.778 "rw_mbytes_per_sec": 0, 00:08:07.778 "r_mbytes_per_sec": 0, 00:08:07.778 "w_mbytes_per_sec": 0 00:08:07.778 }, 00:08:07.778 "claimed": true, 00:08:07.778 "claim_type": "exclusive_write", 00:08:07.778 "zoned": false, 00:08:07.778 "supported_io_types": { 00:08:07.778 "read": true, 00:08:07.778 "write": true, 00:08:07.778 "unmap": true, 00:08:07.778 "flush": true, 00:08:07.778 "reset": true, 00:08:07.778 "nvme_admin": false, 00:08:07.778 "nvme_io": false, 00:08:07.778 "nvme_io_md": false, 00:08:07.778 "write_zeroes": true, 00:08:07.778 "zcopy": true, 00:08:07.778 "get_zone_info": false, 00:08:07.778 "zone_management": false, 00:08:07.779 "zone_append": false, 00:08:07.779 "compare": false, 00:08:07.779 "compare_and_write": false, 00:08:07.779 "abort": true, 00:08:07.779 "seek_hole": false, 00:08:07.779 "seek_data": false, 00:08:07.779 "copy": true, 00:08:07.779 "nvme_iov_md": false 00:08:07.779 }, 00:08:07.779 "memory_domains": [ 00:08:07.779 { 00:08:07.779 "dma_device_id": "system", 00:08:07.779 "dma_device_type": 1 00:08:07.779 }, 00:08:07.779 { 00:08:07.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.779 "dma_device_type": 2 00:08:07.779 } 00:08:07.779 ], 00:08:07.779 "driver_specific": {} 00:08:07.779 } 00:08:07.779 ]' 00:08:07.779 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:07.779 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:07.779 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:07.779 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:07.779 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:07.779 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:07.779 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:07.779 14:50:23 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:09.219 14:50:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:09.219 14:50:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:09.219 14:50:25 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:09.219 14:50:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:09.219 14:50:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:11.759 14:50:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.701 14:50:28 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.701 ************************************ 00:08:12.701 START TEST filesystem_ext4 00:08:12.701 ************************************ 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:12.701 mke2fs 1.46.5 (30-Dec-2021) 00:08:12.701 Discarding device blocks: 0/522240 done 00:08:12.701 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:12.701 Filesystem UUID: db01b24a-382f-4869-8223-3970431fc4be 00:08:12.701 Superblock backups stored on blocks: 00:08:12.701 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:12.701 00:08:12.701 Allocating group tables: 0/64 done 00:08:12.701 Writing inode tables: 0/64 done 00:08:12.701 Creating journal (8192 blocks): done 00:08:12.701 Writing superblocks and filesystem accounting information: 0/64 done 00:08:12.701 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:12.701 14:50:28 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1653959 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.701 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.701 00:08:12.701 real 0m0.136s 00:08:12.701 user 0m0.018s 00:08:12.701 sys 0m0.055s 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:12.702 ************************************ 00:08:12.702 END TEST filesystem_ext4 00:08:12.702 ************************************ 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.702 ************************************ 00:08:12.702 START TEST filesystem_btrfs 00:08:12.702 ************************************ 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = 
ext4 ']' 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:12.702 btrfs-progs v6.6.2 00:08:12.702 See https://btrfs.readthedocs.io for more information. 00:08:12.702 00:08:12.702 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:12.702 NOTE: several default settings have changed in version 5.15, please make sure 00:08:12.702 this does not affect your deployments: 00:08:12.702 - DUP for metadata (-m dup) 00:08:12.702 - enabled no-holes (-O no-holes) 00:08:12.702 - enabled free-space-tree (-R free-space-tree) 00:08:12.702 00:08:12.702 Label: (null) 00:08:12.702 UUID: 4796f5e9-0137-4323-89cc-b5f4a88ae34f 00:08:12.702 Node size: 16384 00:08:12.702 Sector size: 4096 00:08:12.702 Filesystem size: 510.00MiB 00:08:12.702 Block group profiles: 00:08:12.702 Data: single 8.00MiB 00:08:12.702 Metadata: DUP 32.00MiB 00:08:12.702 System: DUP 8.00MiB 00:08:12.702 SSD detected: yes 00:08:12.702 Zoned device: no 00:08:12.702 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:12.702 Runtime features: free-space-tree 00:08:12.702 Checksum: crc32c 00:08:12.702 Number of devices: 1 00:08:12.702 Devices: 00:08:12.702 ID SIZE PATH 00:08:12.702 1 510.00MiB /dev/nvme0n1p1 00:08:12.702 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:12.702 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.963 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1653959 00:08:12.963 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.963 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.963 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.963 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.963 00:08:12.963 real 0m0.142s 00:08:12.963 user 0m0.025s 00:08:12.963 sys 0m0.061s 00:08:12.963 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.963 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.963 ************************************ 00:08:12.963 END TEST filesystem_btrfs 00:08:12.963 ************************************ 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.964 ************************************ 00:08:12.964 START TEST filesystem_xfs 00:08:12.964 ************************************ 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:12.964 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:12.964 = sectsz=512 attr=2, projid32bit=1 00:08:12.964 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:12.964 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:12.964 data = bsize=4096 blocks=130560, imaxpct=25 00:08:12.964 = sunit=0 swidth=0 blks 00:08:12.964 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:12.964 log =internal log bsize=4096 blocks=16384, version=2 00:08:12.964 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:12.964 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:12.964 Discarding blocks...Done. 
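Each filesystem_* subtest above repeats one pattern, traced for ext4 and btrfs and now for xfs: format the exported namespace's partition, exercise it through the mounted filesystem, unmount, then confirm that the target process and the block devices survived. A condensed sketch of that loop, using the device name and nvmf_tgt pid from this run (the mkfs force flag differs per filesystem: -F for ext4, -f for btrfs and xfs):

  # Condensed from the traced target/filesystem.sh steps (device and pid taken from this log)
  mkfs.xfs -f /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa; sync
  rm /mnt/device/aaa; sync
  umount /mnt/device
  kill -0 1653959                          # nvmf_tgt must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible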
00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1653959 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.964 14:50:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.964 14:50:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.964 14:50:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.964 00:08:12.964 real 0m0.141s 00:08:12.964 user 0m0.025s 00:08:12.964 sys 0m0.050s 00:08:12.964 14:50:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.964 14:50:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.964 ************************************ 00:08:12.964 END TEST filesystem_xfs 00:08:12.964 ************************************ 00:08:13.224 14:50:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:13.224 14:50:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.224 14:50:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:13.224 14:50:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:14.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1653959 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1653959 ']' 00:08:14.608 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1653959 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1653959 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1653959' 00:08:14.609 killing process with pid 1653959 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1653959 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1653959 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:14.609 00:08:14.609 real 0m8.147s 00:08:14.609 user 0m31.861s 00:08:14.609 sys 0m0.929s 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.609 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.609 ************************************ 00:08:14.609 END TEST nvmf_filesystem_no_in_capsule 00:08:14.609 ************************************ 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:14.869 14:50:30 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.869 ************************************ 00:08:14.869 START TEST nvmf_filesystem_in_capsule 00:08:14.869 ************************************ 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1655882 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1655882 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1655882 ']' 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.869 14:50:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.869 [2024-07-15 14:50:30.794989] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:14.869 [2024-07-15 14:50:30.795038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.869 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.869 [2024-07-15 14:50:30.864278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.129 [2024-07-15 14:50:30.939684] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.129 [2024-07-15 14:50:30.939722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
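For the in-capsule variant the target is relaunched: nvmfappstart forks build/bin/nvmf_tgt with core mask 0xF and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough sketch of that startup step, with the socket-polling mechanism an assumption (the real waitforlisten in the harness is more careful and also retries RPC calls):

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Wait for the UNIX-domain RPC socket to appear, bailing out if the target dies first.
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1
    sleep 0.5
done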
00:08:15.129 [2024-07-15 14:50:30.939730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.129 [2024-07-15 14:50:30.939736] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.129 [2024-07-15 14:50:30.939746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.129 [2024-07-15 14:50:30.939886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.129 [2024-07-15 14:50:30.940006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.129 [2024-07-15 14:50:30.940164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.129 [2024-07-15 14:50:30.940165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.699 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.699 [2024-07-15 14:50:31.658910] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x125c200/0x12606f0) succeed. 00:08:15.699 [2024-07-15 14:50:31.673916] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x125d840/0x12a1d80) succeed. 
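Stripped of the harness wrappers, the rpc_cmd calls in this block and the ones just below amount to the following scripts/rpc.py sequence (commands copied from the xtrace; rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096   # 4096-byte in-capsule data
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MB ram disk with 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The initiator side then attaches with nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420, as the trace shows a few lines further on.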
00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.960 Malloc1 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.960 [2024-07-15 14:50:31.904463] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.960 
14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:15.960 { 00:08:15.960 "name": "Malloc1", 00:08:15.960 "aliases": [ 00:08:15.960 "9ecdfc51-74c7-45df-a755-3d5fbc368ccd" 00:08:15.960 ], 00:08:15.960 "product_name": "Malloc disk", 00:08:15.960 "block_size": 512, 00:08:15.960 "num_blocks": 1048576, 00:08:15.960 "uuid": "9ecdfc51-74c7-45df-a755-3d5fbc368ccd", 00:08:15.960 "assigned_rate_limits": { 00:08:15.960 "rw_ios_per_sec": 0, 00:08:15.960 "rw_mbytes_per_sec": 0, 00:08:15.960 "r_mbytes_per_sec": 0, 00:08:15.960 "w_mbytes_per_sec": 0 00:08:15.960 }, 00:08:15.960 "claimed": true, 00:08:15.960 "claim_type": "exclusive_write", 00:08:15.960 "zoned": false, 00:08:15.960 "supported_io_types": { 00:08:15.960 "read": true, 00:08:15.960 "write": true, 00:08:15.960 "unmap": true, 00:08:15.960 "flush": true, 00:08:15.960 "reset": true, 00:08:15.960 "nvme_admin": false, 00:08:15.960 "nvme_io": false, 00:08:15.960 "nvme_io_md": false, 00:08:15.960 "write_zeroes": true, 00:08:15.960 "zcopy": true, 00:08:15.960 "get_zone_info": false, 00:08:15.960 "zone_management": false, 00:08:15.960 "zone_append": false, 00:08:15.960 "compare": false, 00:08:15.960 "compare_and_write": false, 00:08:15.960 "abort": true, 00:08:15.960 "seek_hole": false, 00:08:15.960 "seek_data": false, 00:08:15.960 "copy": true, 00:08:15.960 "nvme_iov_md": false 00:08:15.960 }, 00:08:15.960 "memory_domains": [ 00:08:15.960 { 00:08:15.960 "dma_device_id": "system", 00:08:15.960 "dma_device_type": 1 00:08:15.960 }, 00:08:15.960 { 00:08:15.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.960 "dma_device_type": 2 00:08:15.960 } 00:08:15.960 ], 00:08:15.960 "driver_specific": {} 00:08:15.960 } 00:08:15.960 ]' 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:15.960 14:50:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:16.221 14:50:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:16.221 14:50:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:16.221 14:50:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:16.221 14:50:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:16.221 14:50:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:17.603 14:50:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:17.603 14:50:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:17.603 14:50:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:17.603 14:50:33 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:17.603 14:50:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:19.509 14:50:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.891 ************************************ 00:08:20.891 START TEST filesystem_in_capsule_ext4 00:08:20.891 
************************************ 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:20.891 mke2fs 1.46.5 (30-Dec-2021) 00:08:20.891 Discarding device blocks: 0/522240 done 00:08:20.891 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:20.891 Filesystem UUID: 7545eb90-c8de-40b5-8071-4e4f9cf409e5 00:08:20.891 Superblock backups stored on blocks: 00:08:20.891 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:20.891 00:08:20.891 Allocating group tables: 0/64 done 00:08:20.891 Writing inode tables: 0/64 done 00:08:20.891 Creating journal (8192 blocks): done 00:08:20.891 Writing superblocks and filesystem accounting information: 0/64 done 00:08:20.891 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1655882 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.891 00:08:20.891 real 0m0.131s 00:08:20.891 user 0m0.023s 00:08:20.891 sys 0m0.049s 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:20.891 ************************************ 00:08:20.891 END TEST filesystem_in_capsule_ext4 00:08:20.891 ************************************ 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.891 ************************************ 00:08:20.891 START TEST filesystem_in_capsule_btrfs 00:08:20.891 ************************************ 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:20.891 14:50:36 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:20.891 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:20.891 btrfs-progs v6.6.2 00:08:20.891 See https://btrfs.readthedocs.io for more information. 00:08:20.891 00:08:20.891 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:20.891 NOTE: several default settings have changed in version 5.15, please make sure 00:08:20.891 this does not affect your deployments: 00:08:20.891 - DUP for metadata (-m dup) 00:08:20.891 - enabled no-holes (-O no-holes) 00:08:20.891 - enabled free-space-tree (-R free-space-tree) 00:08:20.891 00:08:20.891 Label: (null) 00:08:20.891 UUID: 0b64f710-7cf5-45b3-97a8-761aae89c510 00:08:20.891 Node size: 16384 00:08:20.891 Sector size: 4096 00:08:20.891 Filesystem size: 510.00MiB 00:08:20.891 Block group profiles: 00:08:20.891 Data: single 8.00MiB 00:08:20.891 Metadata: DUP 32.00MiB 00:08:20.891 System: DUP 8.00MiB 00:08:20.892 SSD detected: yes 00:08:20.892 Zoned device: no 00:08:20.892 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:20.892 Runtime features: free-space-tree 00:08:20.892 Checksum: crc32c 00:08:20.892 Number of devices: 1 00:08:20.892 Devices: 00:08:20.892 ID SIZE PATH 00:08:20.892 1 510.00MiB /dev/nvme0n1p1 00:08:20.892 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1655882 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.892 00:08:20.892 real 0m0.129s 00:08:20.892 user 0m0.022s 00:08:20.892 sys 0m0.057s 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.892 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:20.892 ************************************ 00:08:20.892 END TEST filesystem_in_capsule_btrfs 00:08:20.892 ************************************ 00:08:21.152 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:21.152 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:21.152 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:21.152 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.152 14:50:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.152 ************************************ 00:08:21.152 START TEST filesystem_in_capsule_xfs 00:08:21.153 ************************************ 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:21.153 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:21.153 = sectsz=512 attr=2, projid32bit=1 00:08:21.153 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:21.153 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:21.153 data = bsize=4096 blocks=130560, imaxpct=25 00:08:21.153 = sunit=0 swidth=0 blks 00:08:21.153 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 
00:08:21.153 log =internal log bsize=4096 blocks=16384, version=2 00:08:21.153 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:21.153 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:21.153 Discarding blocks...Done. 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1655882 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.153 00:08:21.153 real 0m0.147s 00:08:21.153 user 0m0.020s 00:08:21.153 sys 0m0.051s 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:21.153 ************************************ 00:08:21.153 END TEST filesystem_in_capsule_xfs 00:08:21.153 ************************************ 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:21.153 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:21.413 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:21.413 14:50:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1655882 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1655882 ']' 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1655882 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1655882 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1655882' 00:08:22.797 killing process with pid 1655882 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1655882 00:08:22.797 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1655882 00:08:23.057 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:23.057 00:08:23.057 real 0m8.227s 00:08:23.058 user 0m32.114s 00:08:23.058 sys 0m0.967s 00:08:23.058 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.058 14:50:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:23.058 ************************************ 00:08:23.058 END TEST nvmf_filesystem_in_capsule 00:08:23.058 ************************************ 00:08:23.058 14:50:39 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:23.058 rmmod nvme_rdma 00:08:23.058 rmmod nvme_fabrics 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:23.058 00:08:23.058 real 0m25.026s 00:08:23.058 user 1m6.533s 00:08:23.058 sys 0m8.100s 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.058 14:50:39 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.058 ************************************ 00:08:23.058 END TEST nvmf_filesystem 00:08:23.058 ************************************ 00:08:23.058 14:50:39 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:23.058 14:50:39 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:23.058 14:50:39 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:23.058 14:50:39 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.058 14:50:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:23.319 ************************************ 00:08:23.319 START TEST nvmf_target_discovery 00:08:23.319 ************************************ 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:23.319 * Looking for test storage... 
00:08:23.319 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.319 14:50:39 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.459 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.459 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.459 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.459 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.459 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:31.460 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:31.460 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.460 14:50:47 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:31.460 Found net devices under 0000:98:00.0: mlx_0_0 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:31.460 Found net devices under 0000:98:00.1: mlx_0_1 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:31.460 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:31.460 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:31.460 altname enp152s0f0np0 00:08:31.460 altname ens817f0np0 00:08:31.460 inet 192.168.100.8/24 scope global mlx_0_0 00:08:31.460 valid_lft forever preferred_lft forever 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:31.460 14:50:47 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:31.460 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:31.460 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:31.460 altname enp152s0f1np1 00:08:31.460 altname ens817f1np1 00:08:31.460 inet 192.168.100.9/24 scope global mlx_0_1 00:08:31.460 valid_lft forever preferred_lft forever 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.460 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:31.461 192.168.100.9' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:31.461 192.168.100.9' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:31.461 192.168.100.9' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1661865 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1661865 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1661865 ']' 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
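For reference, the target startup captured just above amounts to launching the nvmf_tgt binary shown in the trace and polling its RPC socket until it answers. A minimal manual equivalent, assuming the same build path and the default /var/tmp/spdk.sock socket used in this run, would be:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # poll until the RPC server is listening on /var/tmp/spdk.sock
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do sleep 1; done

The -m 0xF core mask and -e 0xFFFF tracepoint mask mirror the nvmfappstart invocation logged above; the polling loop stands in for the harness's waitforlisten helper.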
00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.461 14:50:47 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:31.461 [2024-07-15 14:50:47.376841] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:31.461 [2024-07-15 14:50:47.376910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.461 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.461 [2024-07-15 14:50:47.453041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.722 [2024-07-15 14:50:47.528565] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.722 [2024-07-15 14:50:47.528606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.722 [2024-07-15 14:50:47.528614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.722 [2024-07-15 14:50:47.528621] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.722 [2024-07-15 14:50:47.528626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.722 [2024-07-15 14:50:47.528765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.722 [2024-07-15 14:50:47.528888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.722 [2024-07-15 14:50:47.529044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.722 [2024-07-15 14:50:47.529045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.293 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.293 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:32.293 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.293 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.293 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.293 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.293 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:32.293 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.293 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.293 [2024-07-15 14:50:48.241907] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1418200/0x141c6f0) succeed. 00:08:32.293 [2024-07-15 14:50:48.255446] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1419840/0x145dd80) succeed. 
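With the RDMA transport created (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, acknowledged by the two create_ib_device notices above), the test loops over four null bdevs and subsystems. A minimal rpc.py sketch of one iteration, using the first subsystem's NQN, serial number and listener address from this run and assuming the default RPC socket:

  ./scripts/rpc.py bdev_null_create Null1 102400 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The discovery log entries dumped later in the trace can then be read back from an initiator with nvme discover -t rdma -a 192.168.100.8 -s 4420.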
00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 Null1 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 [2024-07-15 14:50:48.431276] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 Null2 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:32.557 14:50:48 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 Null3 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 Null4 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.557 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 4420 00:08:32.818 00:08:32.818 Discovery Log Number of Records 6, Generation counter 6 00:08:32.818 =====Discovery Log Entry 0====== 00:08:32.818 trtype: rdma 00:08:32.818 adrfam: ipv4 00:08:32.818 subtype: current discovery subsystem 00:08:32.818 treq: not required 00:08:32.818 portid: 0 00:08:32.818 trsvcid: 4420 00:08:32.818 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:32.818 traddr: 192.168.100.8 00:08:32.818 eflags: explicit discovery connections, duplicate discovery information 00:08:32.818 rdma_prtype: not specified 00:08:32.818 rdma_qptype: connected 00:08:32.818 rdma_cms: rdma-cm 00:08:32.818 rdma_pkey: 0x0000 00:08:32.818 =====Discovery Log Entry 1====== 00:08:32.818 trtype: rdma 00:08:32.818 adrfam: ipv4 00:08:32.818 subtype: nvme subsystem 00:08:32.818 treq: not required 00:08:32.818 portid: 0 00:08:32.818 trsvcid: 4420 00:08:32.818 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:32.818 traddr: 192.168.100.8 00:08:32.818 eflags: none 00:08:32.818 rdma_prtype: not specified 00:08:32.818 rdma_qptype: connected 00:08:32.818 rdma_cms: rdma-cm 00:08:32.818 rdma_pkey: 0x0000 00:08:32.818 =====Discovery Log Entry 2====== 00:08:32.818 
trtype: rdma 00:08:32.818 adrfam: ipv4 00:08:32.818 subtype: nvme subsystem 00:08:32.818 treq: not required 00:08:32.818 portid: 0 00:08:32.818 trsvcid: 4420 00:08:32.818 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:32.818 traddr: 192.168.100.8 00:08:32.818 eflags: none 00:08:32.818 rdma_prtype: not specified 00:08:32.818 rdma_qptype: connected 00:08:32.818 rdma_cms: rdma-cm 00:08:32.818 rdma_pkey: 0x0000 00:08:32.818 =====Discovery Log Entry 3====== 00:08:32.818 trtype: rdma 00:08:32.818 adrfam: ipv4 00:08:32.818 subtype: nvme subsystem 00:08:32.818 treq: not required 00:08:32.818 portid: 0 00:08:32.818 trsvcid: 4420 00:08:32.818 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:32.818 traddr: 192.168.100.8 00:08:32.818 eflags: none 00:08:32.818 rdma_prtype: not specified 00:08:32.818 rdma_qptype: connected 00:08:32.818 rdma_cms: rdma-cm 00:08:32.818 rdma_pkey: 0x0000 00:08:32.818 =====Discovery Log Entry 4====== 00:08:32.818 trtype: rdma 00:08:32.818 adrfam: ipv4 00:08:32.818 subtype: nvme subsystem 00:08:32.818 treq: not required 00:08:32.818 portid: 0 00:08:32.818 trsvcid: 4420 00:08:32.818 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:32.818 traddr: 192.168.100.8 00:08:32.818 eflags: none 00:08:32.818 rdma_prtype: not specified 00:08:32.818 rdma_qptype: connected 00:08:32.818 rdma_cms: rdma-cm 00:08:32.818 rdma_pkey: 0x0000 00:08:32.818 =====Discovery Log Entry 5====== 00:08:32.818 trtype: rdma 00:08:32.818 adrfam: ipv4 00:08:32.818 subtype: discovery subsystem referral 00:08:32.818 treq: not required 00:08:32.818 portid: 0 00:08:32.818 trsvcid: 4430 00:08:32.818 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:32.818 traddr: 192.168.100.8 00:08:32.818 eflags: none 00:08:32.818 rdma_prtype: unrecognized 00:08:32.818 rdma_qptype: unrecognized 00:08:32.818 rdma_cms: unrecognized 00:08:32.818 rdma_pkey: 0x0000 00:08:32.818 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:32.818 Perform nvmf subsystem discovery via RPC 00:08:32.818 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:32.818 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.818 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.818 [ 00:08:32.818 { 00:08:32.818 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:32.818 "subtype": "Discovery", 00:08:32.818 "listen_addresses": [ 00:08:32.818 { 00:08:32.818 "trtype": "RDMA", 00:08:32.818 "adrfam": "IPv4", 00:08:32.818 "traddr": "192.168.100.8", 00:08:32.818 "trsvcid": "4420" 00:08:32.818 } 00:08:32.818 ], 00:08:32.818 "allow_any_host": true, 00:08:32.818 "hosts": [] 00:08:32.818 }, 00:08:32.818 { 00:08:32.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:32.818 "subtype": "NVMe", 00:08:32.818 "listen_addresses": [ 00:08:32.818 { 00:08:32.819 "trtype": "RDMA", 00:08:32.819 "adrfam": "IPv4", 00:08:32.819 "traddr": "192.168.100.8", 00:08:32.819 "trsvcid": "4420" 00:08:32.819 } 00:08:32.819 ], 00:08:32.819 "allow_any_host": true, 00:08:32.819 "hosts": [], 00:08:32.819 "serial_number": "SPDK00000000000001", 00:08:32.819 "model_number": "SPDK bdev Controller", 00:08:32.819 "max_namespaces": 32, 00:08:32.819 "min_cntlid": 1, 00:08:32.819 "max_cntlid": 65519, 00:08:32.819 "namespaces": [ 00:08:32.819 { 00:08:32.819 "nsid": 1, 00:08:32.819 "bdev_name": "Null1", 00:08:32.819 "name": "Null1", 00:08:32.819 "nguid": "E2E2E0B96A8C413781A35511C0224C8F", 00:08:32.819 "uuid": 
"e2e2e0b9-6a8c-4137-81a3-5511c0224c8f" 00:08:32.819 } 00:08:32.819 ] 00:08:32.819 }, 00:08:32.819 { 00:08:32.819 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:32.819 "subtype": "NVMe", 00:08:32.819 "listen_addresses": [ 00:08:32.819 { 00:08:32.819 "trtype": "RDMA", 00:08:32.819 "adrfam": "IPv4", 00:08:32.819 "traddr": "192.168.100.8", 00:08:32.819 "trsvcid": "4420" 00:08:32.819 } 00:08:32.819 ], 00:08:32.819 "allow_any_host": true, 00:08:32.819 "hosts": [], 00:08:32.819 "serial_number": "SPDK00000000000002", 00:08:32.819 "model_number": "SPDK bdev Controller", 00:08:32.819 "max_namespaces": 32, 00:08:32.819 "min_cntlid": 1, 00:08:32.819 "max_cntlid": 65519, 00:08:32.819 "namespaces": [ 00:08:32.819 { 00:08:32.819 "nsid": 1, 00:08:32.819 "bdev_name": "Null2", 00:08:32.819 "name": "Null2", 00:08:32.819 "nguid": "6493219943BC4629806E7AD0F5C50377", 00:08:32.819 "uuid": "64932199-43bc-4629-806e-7ad0f5c50377" 00:08:32.819 } 00:08:32.819 ] 00:08:32.819 }, 00:08:32.819 { 00:08:32.819 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:32.819 "subtype": "NVMe", 00:08:32.819 "listen_addresses": [ 00:08:32.819 { 00:08:32.819 "trtype": "RDMA", 00:08:32.819 "adrfam": "IPv4", 00:08:32.819 "traddr": "192.168.100.8", 00:08:32.819 "trsvcid": "4420" 00:08:32.819 } 00:08:32.819 ], 00:08:32.819 "allow_any_host": true, 00:08:32.819 "hosts": [], 00:08:32.819 "serial_number": "SPDK00000000000003", 00:08:32.819 "model_number": "SPDK bdev Controller", 00:08:32.819 "max_namespaces": 32, 00:08:32.819 "min_cntlid": 1, 00:08:32.819 "max_cntlid": 65519, 00:08:32.819 "namespaces": [ 00:08:32.819 { 00:08:32.819 "nsid": 1, 00:08:32.819 "bdev_name": "Null3", 00:08:32.819 "name": "Null3", 00:08:32.819 "nguid": "53803665388C48258E1BDF77D167F1AE", 00:08:32.819 "uuid": "53803665-388c-4825-8e1b-df77d167f1ae" 00:08:32.819 } 00:08:32.819 ] 00:08:32.819 }, 00:08:32.819 { 00:08:32.819 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:32.819 "subtype": "NVMe", 00:08:32.819 "listen_addresses": [ 00:08:32.819 { 00:08:32.819 "trtype": "RDMA", 00:08:32.819 "adrfam": "IPv4", 00:08:32.819 "traddr": "192.168.100.8", 00:08:32.819 "trsvcid": "4420" 00:08:32.819 } 00:08:32.819 ], 00:08:32.819 "allow_any_host": true, 00:08:32.819 "hosts": [], 00:08:32.819 "serial_number": "SPDK00000000000004", 00:08:32.819 "model_number": "SPDK bdev Controller", 00:08:32.819 "max_namespaces": 32, 00:08:32.819 "min_cntlid": 1, 00:08:32.819 "max_cntlid": 65519, 00:08:32.819 "namespaces": [ 00:08:32.819 { 00:08:32.819 "nsid": 1, 00:08:32.819 "bdev_name": "Null4", 00:08:32.819 "name": "Null4", 00:08:32.819 "nguid": "461057E575464F5A93CEB8491BE84FC9", 00:08:32.819 "uuid": "461057e5-7546-4f5a-93ce-b8491be84fc9" 00:08:32.819 } 00:08:32.819 ] 00:08:32.819 } 00:08:32.819 ] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.819 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:32.819 rmmod nvme_rdma 00:08:32.819 rmmod nvme_fabrics 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1661865 ']' 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1661865 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1661865 ']' 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1661865 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1661865 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1661865' 00:08:33.079 killing process with pid 1661865 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1661865 00:08:33.079 14:50:48 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@972 -- # wait 1661865 00:08:33.340 14:50:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.340 14:50:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:33.340 00:08:33.340 real 0m10.029s 00:08:33.340 user 0m9.051s 00:08:33.340 sys 0m6.266s 00:08:33.340 14:50:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.340 14:50:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:33.340 ************************************ 00:08:33.340 END TEST nvmf_target_discovery 00:08:33.340 ************************************ 00:08:33.340 14:50:49 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:33.340 14:50:49 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:33.340 14:50:49 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.340 14:50:49 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.340 14:50:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:33.340 ************************************ 00:08:33.340 START TEST nvmf_referrals 00:08:33.340 ************************************ 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:33.340 * Looking for test storage... 00:08:33.340 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.340 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.341 14:50:49 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:41.500 14:50:57 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:41.500 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:41.500 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 
0x1015 == \0\x\1\0\1\9 ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:41.500 Found net devices under 0000:98:00.0: mlx_0_0 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:41.500 Found net devices under 0000:98:00.1: mlx_0_1 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.500 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:41.501 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.501 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:41.501 altname enp152s0f0np0 00:08:41.501 altname ens817f0np0 00:08:41.501 inet 192.168.100.8/24 scope global mlx_0_0 00:08:41.501 valid_lft forever preferred_lft forever 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:41.501 
14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:41.501 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.501 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:41.501 altname enp152s0f1np1 00:08:41.501 altname ens817f1np1 00:08:41.501 inet 192.168.100.9/24 scope global mlx_0_1 00:08:41.501 valid_lft forever preferred_lft forever 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show 
mlx_0_0 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:41.501 192.168.100.9' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:41.501 192.168.100.9' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:41.501 192.168.100.9' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1666494 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1666494 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1666494 ']' 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
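At this point the referrals test has its RDMA ports configured and launches the SPDK NVMe-oF target, then waits until the target answers on its RPC socket before issuing any RPCs. A simplified sketch of what the nvmfappstart/waitforlisten helpers do here, using the binary path and flags shown in the log (the real helpers in test/nvmf/common.sh and autotest_common.sh also install cleanup traps and poll through scripts/rpc.py rather than testing for the socket file):

# sketch only -- start nvmf_tgt as in the log and wait for /var/tmp/spdk.sock
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified wait
echo "nvmf_tgt running as pid $nvmfpid"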
00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.501 14:50:57 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.501 [2024-07-15 14:50:57.436343] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:41.501 [2024-07-15 14:50:57.436399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.501 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.501 [2024-07-15 14:50:57.505201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.763 [2024-07-15 14:50:57.574750] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.763 [2024-07-15 14:50:57.574786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.763 [2024-07-15 14:50:57.574794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.763 [2024-07-15 14:50:57.574801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.763 [2024-07-15 14:50:57.574806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.763 [2024-07-15 14:50:57.574942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.763 [2024-07-15 14:50:57.575068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.763 [2024-07-15 14:50:57.575224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.763 [2024-07-15 14:50:57.575224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.334 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.334 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:42.334 14:50:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.334 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.334 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.334 14:50:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.334 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:42.334 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.334 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.334 [2024-07-15 14:50:58.287681] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x832200/0x8366f0) succeed. 00:08:42.334 [2024-07-15 14:50:58.302204] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x833840/0x877d80) succeed. 
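With both IB devices registered, the referral manipulation that follows reduces to a short RPC sequence. rpc_cmd is a thin wrapper that forwards its arguments to SPDK's scripts/rpc.py, so the same configuration could be driven by hand roughly as below (arguments copied from the surrounding log lines; an illustrative sketch, not the helper code itself):

rpc="$SPDK"/scripts/rpc.py   # $SPDK as in the sketch above; default socket is /var/tmp/spdk.sock
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
$rpc nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430
$rpc nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430
$rpc nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430
$rpc nvmf_discovery_get_referrals | jq length   # the test expects 3 here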
00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.596 [2024-07-15 14:50:58.430087] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:42.596 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.857 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:43.118 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:43.118 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:43.118 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.118 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.118 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:43.118 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.118 14:50:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:43.118 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.379 
14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:43.379 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.641 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:43.902 rmmod nvme_rdma 00:08:43.902 rmmod nvme_fabrics 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1666494 ']' 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1666494 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1666494 ']' 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1666494 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1666494 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1666494' 00:08:43.902 killing process with pid 1666494 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1666494 00:08:43.902 14:50:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1666494 00:08:44.164 14:51:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.164 14:51:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
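The teardown above (trap reset, modprobe -v -r nvme-rdma and nvme-fabrics, killprocess on the nvmf_tgt pid) closes out the referrals test. For reference, the host-side check it repeated at every step is a discovery against the 8009 listener filtered with jq, along these lines (host NQN/ID and address copied from the log; the surrounding get_referral_ips helper sorts the output and the test compares it against the expected address list):

nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
    -t rdma -a 192.168.100.8 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort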
00:08:44.164 00:08:44.164 real 0m10.773s 00:08:44.164 user 0m12.919s 00:08:44.164 sys 0m6.427s 00:08:44.164 14:51:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.164 14:51:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:44.164 ************************************ 00:08:44.164 END TEST nvmf_referrals 00:08:44.164 ************************************ 00:08:44.164 14:51:00 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:44.164 14:51:00 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:44.164 14:51:00 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.164 14:51:00 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.164 14:51:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:44.164 ************************************ 00:08:44.164 START TEST nvmf_connect_disconnect 00:08:44.164 ************************************ 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:44.164 * Looking for test storage... 00:08:44.164 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.164 14:51:00 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
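One detail worth noting from the common.sh sourcing above: the host identity that every later 'nvme discover' call presents is generated once per run with nvme gen-hostnqn, and the host ID is the UUID portion of that NQN (00539ede-... in this run). Roughly, and as an illustration only (the extraction below is one plausible way to do it, not necessarily the exact expression the helper uses):

NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}       # bare UUID, passed later as --hostid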
00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.164 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.425 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.425 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.425 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.425 14:51:00 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:52.566 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.566 14:51:08 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:52.567 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:52.567 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound 
]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:52.567 Found net devices under 0000:98:00.0: mlx_0_0 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:52.567 Found net devices under 0000:98:00.1: mlx_0_1 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 
-- # modprobe ib_umad 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:08:52.567 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.567 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:52.567 altname enp152s0f0np0 00:08:52.567 altname ens817f0np0 00:08:52.567 inet 192.168.100.8/24 scope global mlx_0_0 00:08:52.567 valid_lft forever preferred_lft forever 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:52.567 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.567 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:52.567 altname enp152s0f1np1 00:08:52.567 altname ens817f1np1 00:08:52.567 inet 192.168.100.9/24 scope global mlx_0_1 00:08:52.567 valid_lft forever preferred_lft forever 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.567 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:52.568 192.168.100.9' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:52.568 192.168.100.9' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:52.568 192.168.100.9' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:52.568 14:51:08 
nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1671585 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1671585 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1671585 ']' 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.568 14:51:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.568 [2024-07-15 14:51:08.441784] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:52.568 [2024-07-15 14:51:08.441852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.568 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.568 [2024-07-15 14:51:08.516645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.568 [2024-07-15 14:51:08.594098] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.568 [2024-07-15 14:51:08.594139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.568 [2024-07-15 14:51:08.594146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.568 [2024-07-15 14:51:08.594157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.568 [2024-07-15 14:51:08.594162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
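The waitforlisten step traced here launches the target binary and then polls its RPC socket before the test continues. A rough stand-alone sketch of that pattern, assuming SPDK's rpc.py is available (the real helper lives in autotest_common.sh and gives up after the max_retries=100 shown in the log):

    # Launch the NVMe-oF target with the flags from the trace
    # (-i shared-memory id, -e tracepoint group mask, -m core mask).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the target answers, or bail out
    # if the process dies first or 100 attempts pass.
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done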
00:08:52.568 [2024-07-15 14:51:08.594298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.568 [2024-07-15 14:51:08.594548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.568 [2024-07-15 14:51:08.594362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.568 [2024-07-15 14:51:08.594549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:53.509 [2024-07-15 14:51:09.275899] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:53.509 [2024-07-15 14:51:09.306563] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e6e200/0x1e726f0) succeed. 00:08:53.509 [2024-07-15 14:51:09.320658] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e6f840/0x1eb3d80) succeed. 
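The transport-creation call above maps onto a single rpc.py invocation; spelled out with the flags exactly as traced (the test issues it through its rpc_cmd wrapper against /var/tmp/spdk.sock):

    # RDMA transport, 1024 shared buffers, 8192-byte IO unit, and an in-capsule
    # data size of 0; the target raises that to the 256-byte minimum, hence the
    # msdbd=16 WARNING logged above.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192 -c 0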
00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:53.509 [2024-07-15 14:51:09.478639] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:53.509 14:51:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:58.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.867 14:51:32 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:17.867 14:51:32 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:17.867 14:51:32 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.867 14:51:32 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:17.867 14:51:32 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:17.867 14:51:32 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:17.867 14:51:32 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:17.867 14:51:32 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.867 14:51:32 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:17.867 rmmod nvme_rdma 00:09:17.867 rmmod nvme_fabrics 00:09:17.867 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.867 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:17.867 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:17.867 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1671585 ']' 00:09:17.867 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1671585 00:09:17.867 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1671585 ']' 00:09:17.867 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1671585 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1671585 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1671585' 00:09:17.868 killing process with pid 1671585 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1671585 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1671585 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:17.868 00:09:17.868 real 0m33.208s 00:09:17.868 user 1m40.551s 00:09:17.868 sys 0m7.066s 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.868 14:51:33 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:17.868 ************************************ 00:09:17.868 END TEST nvmf_connect_disconnect 00:09:17.868 ************************************ 00:09:17.868 14:51:33 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:17.868 14:51:33 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:17.868 14:51:33 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:17.868 14:51:33 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.868 14:51:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:17.868 ************************************ 00:09:17.868 START TEST nvmf_multitarget 00:09:17.868 ************************************ 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:17.868 * Looking for test storage... 00:09:17.868 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.868 14:51:33 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:26.008 14:51:41 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:26.008 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:26.008 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:26.008 Found net devices under 0000:98:00.0: mlx_0_0 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:26.008 Found net devices under 0000:98:00.1: mlx_0_1 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.008 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:26.009 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.009 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:26.009 altname enp152s0f0np0 00:09:26.009 altname ens817f0np0 00:09:26.009 inet 192.168.100.8/24 scope global mlx_0_0 00:09:26.009 valid_lft forever preferred_lft forever 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:26.009 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:26.009 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:26.009 altname enp152s0f1np1 00:09:26.009 altname ens817f1np1 00:09:26.009 inet 192.168.100.9/24 scope global mlx_0_1 00:09:26.009 valid_lft forever preferred_lft forever 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:26.009 192.168.100.9' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:26.009 192.168.100.9' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:26.009 192.168.100.9' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@458 -- # tail -n +2 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1681295 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1681295 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1681295 ']' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:26.009 [2024-07-15 14:51:41.340378] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:26.009 [2024-07-15 14:51:41.340430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.009 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.009 [2024-07-15 14:51:41.408199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.009 [2024-07-15 14:51:41.473695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.009 [2024-07-15 14:51:41.473733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.009 [2024-07-15 14:51:41.473740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.009 [2024-07-15 14:51:41.473747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
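As in the earlier connect_disconnect run, nvmftestinit collects the per-interface addresses into RDMA_IP_LIST and then peels off the first and second entries. A minimal sketch of that split using the values from this run (the list itself is built from ip -o -4 addr show on each RDMA-backed interface):

    # Addresses as reported for mlx_0_0 and mlx_0_1 in this trace.
    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9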
00:09:26.009 [2024-07-15 14:51:41.473752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.009 [2024-07-15 14:51:41.477245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.009 [2024-07-15 14:51:41.477276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.009 [2024-07-15 14:51:41.477438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.009 [2024-07-15 14:51:41.477528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:26.009 "nvmf_tgt_1" 00:09:26.009 14:51:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:26.009 "nvmf_tgt_2" 00:09:26.010 14:51:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:26.010 14:51:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:26.010 14:51:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:26.010 14:51:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:26.270 true 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:26.270 true 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- 
target/multitarget.sh@41 -- # nvmftestfini 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.270 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:26.270 rmmod nvme_rdma 00:09:26.530 rmmod nvme_fabrics 00:09:26.530 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.530 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:26.530 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:26.530 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1681295 ']' 00:09:26.530 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1681295 00:09:26.530 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1681295 ']' 00:09:26.530 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1681295 00:09:26.530 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:26.530 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1681295 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1681295' 00:09:26.531 killing process with pid 1681295 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1681295 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1681295 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:26.531 00:09:26.531 real 0m9.178s 00:09:26.531 user 0m7.229s 00:09:26.531 sys 0m6.190s 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.531 14:51:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:26.531 ************************************ 00:09:26.531 END TEST nvmf_multitarget 00:09:26.531 ************************************ 00:09:26.531 14:51:42 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:26.531 14:51:42 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:26.531 14:51:42 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:26.531 14:51:42 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.531 14:51:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:26.792 ************************************ 00:09:26.792 START TEST nvmf_rpc 00:09:26.792 
************************************ 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:26.792 * Looking for test storage... 00:09:26.792 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.792 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.793 14:51:42 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:26.793 14:51:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.928 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
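The device scan above filters a cached PCI listing by vendor and device ID; 0x15b3 is the Mellanox vendor ID and 0x1015 is one of the entries in the mlx list, which is what this host matches. Outside the harness the same check can be made directly with lspci; an illustrative one-liner, not part of common.sh:

    # Show Mellanox devices with device ID 0x1015 (the two ports this trace
    # reports at 0000:98:00.0 and 0000:98:00.1).
    lspci -Dnn | grep -i '15b3:1015'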
00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:34.929 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:34.929 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:34.929 Found net devices under 0000:98:00.0: mlx_0_0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:34.929 Found net devices under 0000:98:00.1: mlx_0_1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:34.929 14:51:50 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.929 
14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:34.929 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.929 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:34.929 altname enp152s0f0np0 00:09:34.929 altname ens817f0np0 00:09:34.929 inet 192.168.100.8/24 scope global mlx_0_0 00:09:34.929 valid_lft forever preferred_lft forever 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:34.929 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:34.929 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:34.929 altname enp152s0f1np1 00:09:34.929 altname ens817f1np1 00:09:34.929 inet 192.168.100.9/24 scope global mlx_0_1 00:09:34.929 valid_lft forever preferred_lft forever 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:34.929 192.168.100.9' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:34.929 192.168.100.9' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:34.929 192.168.100.9' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1685677 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1685677 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1685677 ']' 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.929 14:51:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.189 [2024-07-15 14:51:51.024514] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:35.189 [2024-07-15 14:51:51.024572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.189 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.189 [2024-07-15 14:51:51.093222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.189 [2024-07-15 14:51:51.159168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.189 [2024-07-15 14:51:51.159208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.189 [2024-07-15 14:51:51.159215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.189 [2024-07-15 14:51:51.159223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.189 [2024-07-15 14:51:51.159228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
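
nvmfappstart above forks the target and waitforlisten blocks until it answers on the UNIX-domain RPC socket named in the log. A sketch of that wait, assuming scripts/rpc.py from the checked-out tree and polling rpc_get_methods (the poll method and 0.5s interval are assumptions); the retry bound mirrors max_retries=100 from autotest_common.sh@834:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  rpc_addr=/var/tmp/spdk.sock
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < 100; i++)); do
    "$spdk/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && break
    sleep 0.5
  done
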
00:09:35.189 [2024-07-15 14:51:51.159317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.189 [2024-07-15 14:51:51.159455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.189 [2024-07-15 14:51:51.159611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.189 [2024-07-15 14:51:51.159611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.760 14:51:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.760 14:51:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:35.760 14:51:51 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.760 14:51:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.760 14:51:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.020 14:51:51 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.020 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:36.020 14:51:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.020 14:51:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.020 14:51:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.020 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:36.020 "tick_rate": 2400000000, 00:09:36.020 "poll_groups": [ 00:09:36.020 { 00:09:36.020 "name": "nvmf_tgt_poll_group_000", 00:09:36.020 "admin_qpairs": 0, 00:09:36.020 "io_qpairs": 0, 00:09:36.020 "current_admin_qpairs": 0, 00:09:36.020 "current_io_qpairs": 0, 00:09:36.020 "pending_bdev_io": 0, 00:09:36.020 "completed_nvme_io": 0, 00:09:36.020 "transports": [] 00:09:36.020 }, 00:09:36.020 { 00:09:36.020 "name": "nvmf_tgt_poll_group_001", 00:09:36.020 "admin_qpairs": 0, 00:09:36.020 "io_qpairs": 0, 00:09:36.020 "current_admin_qpairs": 0, 00:09:36.020 "current_io_qpairs": 0, 00:09:36.020 "pending_bdev_io": 0, 00:09:36.020 "completed_nvme_io": 0, 00:09:36.020 "transports": [] 00:09:36.020 }, 00:09:36.020 { 00:09:36.020 "name": "nvmf_tgt_poll_group_002", 00:09:36.020 "admin_qpairs": 0, 00:09:36.020 "io_qpairs": 0, 00:09:36.020 "current_admin_qpairs": 0, 00:09:36.020 "current_io_qpairs": 0, 00:09:36.020 "pending_bdev_io": 0, 00:09:36.020 "completed_nvme_io": 0, 00:09:36.020 "transports": [] 00:09:36.020 }, 00:09:36.020 { 00:09:36.020 "name": "nvmf_tgt_poll_group_003", 00:09:36.020 "admin_qpairs": 0, 00:09:36.020 "io_qpairs": 0, 00:09:36.020 "current_admin_qpairs": 0, 00:09:36.020 "current_io_qpairs": 0, 00:09:36.020 "pending_bdev_io": 0, 00:09:36.020 "completed_nvme_io": 0, 00:09:36.020 "transports": [] 00:09:36.020 } 00:09:36.020 ] 00:09:36.020 }' 00:09:36.020 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:36.021 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:36.021 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:36.021 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:36.021 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:36.021 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:36.021 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:36.021 14:51:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:09:36.021 14:51:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.021 14:51:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.021 [2024-07-15 14:51:51.998756] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x777210/0x77b700) succeed. 00:09:36.021 [2024-07-15 14:51:52.013474] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x778850/0x7bcd90) succeed. 00:09:36.282 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.282 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:36.282 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.282 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.282 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.282 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:36.282 "tick_rate": 2400000000, 00:09:36.282 "poll_groups": [ 00:09:36.282 { 00:09:36.282 "name": "nvmf_tgt_poll_group_000", 00:09:36.282 "admin_qpairs": 0, 00:09:36.282 "io_qpairs": 0, 00:09:36.282 "current_admin_qpairs": 0, 00:09:36.282 "current_io_qpairs": 0, 00:09:36.282 "pending_bdev_io": 0, 00:09:36.282 "completed_nvme_io": 0, 00:09:36.282 "transports": [ 00:09:36.282 { 00:09:36.282 "trtype": "RDMA", 00:09:36.282 "pending_data_buffer": 0, 00:09:36.282 "devices": [ 00:09:36.282 { 00:09:36.282 "name": "mlx5_0", 00:09:36.282 "polls": 16277, 00:09:36.282 "idle_polls": 16277, 00:09:36.282 "completions": 0, 00:09:36.282 "requests": 0, 00:09:36.282 "request_latency": 0, 00:09:36.282 "pending_free_request": 0, 00:09:36.282 "pending_rdma_read": 0, 00:09:36.282 "pending_rdma_write": 0, 00:09:36.282 "pending_rdma_send": 0, 00:09:36.282 "total_send_wrs": 0, 00:09:36.282 "send_doorbell_updates": 0, 00:09:36.282 "total_recv_wrs": 4096, 00:09:36.282 "recv_doorbell_updates": 1 00:09:36.282 }, 00:09:36.282 { 00:09:36.282 "name": "mlx5_1", 00:09:36.282 "polls": 16277, 00:09:36.282 "idle_polls": 16277, 00:09:36.282 "completions": 0, 00:09:36.282 "requests": 0, 00:09:36.282 "request_latency": 0, 00:09:36.282 "pending_free_request": 0, 00:09:36.282 "pending_rdma_read": 0, 00:09:36.282 "pending_rdma_write": 0, 00:09:36.282 "pending_rdma_send": 0, 00:09:36.282 "total_send_wrs": 0, 00:09:36.282 "send_doorbell_updates": 0, 00:09:36.282 "total_recv_wrs": 4096, 00:09:36.282 "recv_doorbell_updates": 1 00:09:36.282 } 00:09:36.282 ] 00:09:36.282 } 00:09:36.282 ] 00:09:36.282 }, 00:09:36.282 { 00:09:36.282 "name": "nvmf_tgt_poll_group_001", 00:09:36.282 "admin_qpairs": 0, 00:09:36.282 "io_qpairs": 0, 00:09:36.282 "current_admin_qpairs": 0, 00:09:36.282 "current_io_qpairs": 0, 00:09:36.282 "pending_bdev_io": 0, 00:09:36.282 "completed_nvme_io": 0, 00:09:36.282 "transports": [ 00:09:36.282 { 00:09:36.282 "trtype": "RDMA", 00:09:36.282 "pending_data_buffer": 0, 00:09:36.282 "devices": [ 00:09:36.282 { 00:09:36.282 "name": "mlx5_0", 00:09:36.282 "polls": 16336, 00:09:36.282 "idle_polls": 16336, 00:09:36.282 "completions": 0, 00:09:36.282 "requests": 0, 00:09:36.282 "request_latency": 0, 00:09:36.282 "pending_free_request": 0, 00:09:36.282 "pending_rdma_read": 0, 00:09:36.282 "pending_rdma_write": 0, 00:09:36.282 "pending_rdma_send": 0, 00:09:36.282 "total_send_wrs": 0, 00:09:36.282 "send_doorbell_updates": 0, 00:09:36.282 "total_recv_wrs": 4096, 00:09:36.282 "recv_doorbell_updates": 1 00:09:36.282 }, 00:09:36.282 { 
00:09:36.282 "name": "mlx5_1", 00:09:36.282 "polls": 16336, 00:09:36.282 "idle_polls": 16336, 00:09:36.282 "completions": 0, 00:09:36.282 "requests": 0, 00:09:36.282 "request_latency": 0, 00:09:36.282 "pending_free_request": 0, 00:09:36.282 "pending_rdma_read": 0, 00:09:36.282 "pending_rdma_write": 0, 00:09:36.282 "pending_rdma_send": 0, 00:09:36.282 "total_send_wrs": 0, 00:09:36.282 "send_doorbell_updates": 0, 00:09:36.282 "total_recv_wrs": 4096, 00:09:36.282 "recv_doorbell_updates": 1 00:09:36.282 } 00:09:36.282 ] 00:09:36.282 } 00:09:36.282 ] 00:09:36.282 }, 00:09:36.282 { 00:09:36.282 "name": "nvmf_tgt_poll_group_002", 00:09:36.282 "admin_qpairs": 0, 00:09:36.282 "io_qpairs": 0, 00:09:36.282 "current_admin_qpairs": 0, 00:09:36.282 "current_io_qpairs": 0, 00:09:36.282 "pending_bdev_io": 0, 00:09:36.282 "completed_nvme_io": 0, 00:09:36.282 "transports": [ 00:09:36.282 { 00:09:36.282 "trtype": "RDMA", 00:09:36.282 "pending_data_buffer": 0, 00:09:36.282 "devices": [ 00:09:36.282 { 00:09:36.282 "name": "mlx5_0", 00:09:36.282 "polls": 5870, 00:09:36.282 "idle_polls": 5870, 00:09:36.282 "completions": 0, 00:09:36.282 "requests": 0, 00:09:36.282 "request_latency": 0, 00:09:36.282 "pending_free_request": 0, 00:09:36.282 "pending_rdma_read": 0, 00:09:36.282 "pending_rdma_write": 0, 00:09:36.282 "pending_rdma_send": 0, 00:09:36.282 "total_send_wrs": 0, 00:09:36.282 "send_doorbell_updates": 0, 00:09:36.282 "total_recv_wrs": 4096, 00:09:36.282 "recv_doorbell_updates": 1 00:09:36.282 }, 00:09:36.282 { 00:09:36.282 "name": "mlx5_1", 00:09:36.282 "polls": 5870, 00:09:36.282 "idle_polls": 5870, 00:09:36.282 "completions": 0, 00:09:36.282 "requests": 0, 00:09:36.282 "request_latency": 0, 00:09:36.282 "pending_free_request": 0, 00:09:36.282 "pending_rdma_read": 0, 00:09:36.282 "pending_rdma_write": 0, 00:09:36.282 "pending_rdma_send": 0, 00:09:36.282 "total_send_wrs": 0, 00:09:36.282 "send_doorbell_updates": 0, 00:09:36.282 "total_recv_wrs": 4096, 00:09:36.282 "recv_doorbell_updates": 1 00:09:36.282 } 00:09:36.282 ] 00:09:36.282 } 00:09:36.282 ] 00:09:36.282 }, 00:09:36.282 { 00:09:36.282 "name": "nvmf_tgt_poll_group_003", 00:09:36.282 "admin_qpairs": 0, 00:09:36.282 "io_qpairs": 0, 00:09:36.282 "current_admin_qpairs": 0, 00:09:36.282 "current_io_qpairs": 0, 00:09:36.282 "pending_bdev_io": 0, 00:09:36.282 "completed_nvme_io": 0, 00:09:36.282 "transports": [ 00:09:36.282 { 00:09:36.282 "trtype": "RDMA", 00:09:36.282 "pending_data_buffer": 0, 00:09:36.282 "devices": [ 00:09:36.282 { 00:09:36.282 "name": "mlx5_0", 00:09:36.282 "polls": 892, 00:09:36.282 "idle_polls": 892, 00:09:36.282 "completions": 0, 00:09:36.282 "requests": 0, 00:09:36.282 "request_latency": 0, 00:09:36.282 "pending_free_request": 0, 00:09:36.282 "pending_rdma_read": 0, 00:09:36.282 "pending_rdma_write": 0, 00:09:36.282 "pending_rdma_send": 0, 00:09:36.282 "total_send_wrs": 0, 00:09:36.282 "send_doorbell_updates": 0, 00:09:36.282 "total_recv_wrs": 4096, 00:09:36.282 "recv_doorbell_updates": 1 00:09:36.282 }, 00:09:36.282 { 00:09:36.282 "name": "mlx5_1", 00:09:36.282 "polls": 892, 00:09:36.282 "idle_polls": 892, 00:09:36.282 "completions": 0, 00:09:36.282 "requests": 0, 00:09:36.282 "request_latency": 0, 00:09:36.282 "pending_free_request": 0, 00:09:36.282 "pending_rdma_read": 0, 00:09:36.282 "pending_rdma_write": 0, 00:09:36.282 "pending_rdma_send": 0, 00:09:36.282 "total_send_wrs": 0, 00:09:36.282 "send_doorbell_updates": 0, 00:09:36.282 "total_recv_wrs": 4096, 00:09:36.283 "recv_doorbell_updates": 1 00:09:36.283 } 00:09:36.283 ] 
00:09:36.283 } 00:09:36.283 ] 00:09:36.283 } 00:09:36.283 ] 00:09:36.283 }' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:09:36.283 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.543 Malloc1 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:36.543 14:51:52 
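
The jcount/jsum helpers traced above reduce the captured nvmf_get_stats JSON with jq. Reconstructed as a sketch from the target/rpc.sh@14-@15 and @19-@20 lines, reading the $stats variable the script just filled:

  jcount() { local filter=$1; jq "$filter" <<< "$stats" | wc -l; }
  jsum()   { local filter=$1; jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'; }
  jcount '.poll_groups[].name'        # 4 poll groups, as asserted above
  jsum '.poll_groups[].io_qpairs'     # 0 before any host connects
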
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.543 [2024-07-15 14:51:52.482401] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:36.543 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:09:36.544 [2024-07-15 14:51:52.537826] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:36.544 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:36.544 could not add new controller: failed to write to 
nvme-fabrics device 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.544 14:51:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:38.457 14:51:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.457 14:51:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:38.457 14:51:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.457 14:51:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:38.457 14:51:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.369 14:51:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.369 14:51:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.369 14:51:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.369 14:51:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:40.369 14:51:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.369 14:51:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:40.369 14:51:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.312 14:51:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:41.312 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:41.312 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:41.312 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.312 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:41.312 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- 
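
The pair of connects above is the host access-control check: with no allow-listed host, the RDMA connect is rejected with "does not allow host", and after nvmf_subsystem_add_host the identical command succeeds. Condensed from the log (same NQNs and address; rpc.py here stands in for the script's rpc_cmd wrapper):

  HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
  nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
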
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:41.573 [2024-07-15 14:51:57.451951] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:41.573 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:41.573 could not add new controller: failed to write to nvme-fabrics device 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.573 14:51:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:42.954 14:51:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.954 14:51:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:42.954 14:51:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.954 14:51:58 
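
NOT, used for both rejected connects above, runs a command that is expected to fail and inverts its status, so the test only proceeds when the rejection actually happened. A minimal sketch of the es bookkeeping from the autotest_common.sh@648-@675 traces; the real helper also resolves the executable via valid_exec_arg, elided here:

  NOT() {
    local es=0
    "$@" || es=$?
    # succeed only if the wrapped command failed
    (( es != 0 ))
  }
  NOT nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
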
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:42.954 14:51:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:45.514 14:52:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:45.514 14:52:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:45.514 14:52:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:45.514 14:52:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:45.514 14:52:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:45.514 14:52:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:45.514 14:52:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.533 [2024-07-15 14:52:02.319172] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
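
From here the run repeats one create/connect/teardown cycle per iteration, loops=5 in total (target/rpc.sh@81-@94). The per-iteration skeleton, reconstructed from the traces above and below; rpc.py again stands in for rpc_cmd, and HOSTNQN/HOSTID are as defined earlier:

  loops=5
  for i in $(seq 1 $loops); do
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    waitforserial SPDKISFASTANDAWESOME            # sketched after this block
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
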
00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.533 14:52:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:47.915 14:52:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:47.915 14:52:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:47.915 14:52:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.915 14:52:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:47.915 14:52:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:49.826 14:52:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:49.826 14:52:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:49.826 14:52:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.826 14:52:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:49.826 14:52:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.826 14:52:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:49.826 14:52:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- 
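
waitforserial, traced at autotest_common.sh@1198-@1208 above, polls lsblk until a block device carrying the subsystem serial shows up. A sketch of that loop, matching the counters and the 15-iteration bound in the trace:

  waitforserial() {
    local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
      sleep 2
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
      (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
  }
  waitforserial SPDKISFASTANDAWESOME
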
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.208 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.209 [2024-07-15 14:52:07.160215] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.209 14:52:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:52.593 14:52:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:52.593 14:52:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:52.593 14:52:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.593 14:52:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:52.593 14:52:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:55.139 14:52:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:55.139 14:52:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:55.139 14:52:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.139 14:52:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:55.139 14:52:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.139 14:52:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:55.139 14:52:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:56.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.082 [2024-07-15 14:52:11.981910] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.082 14:52:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.082 14:52:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.082 14:52:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:57.467 14:52:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:57.467 14:52:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:57.467 14:52:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.467 14:52:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:57.467 14:52:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:59.377 14:52:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:59.377 14:52:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:59.377 14:52:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.377 14:52:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:59.377 14:52:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.377 14:52:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:59.377 14:52:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:00.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 
-t rdma -a 192.168.100.8 -s 4420 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.758 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.758 [2024-07-15 14:52:16.817138] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.018 14:52:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:02.398 14:52:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:02.398 14:52:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:02.398 14:52:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.398 14:52:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:02.398 14:52:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:04.310 14:52:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:04.310 14:52:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:04.310 14:52:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:04.310 14:52:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:04.310 14:52:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.310 14:52:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:04.310 14:52:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
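For reference, the create/connect/teardown cycle the trace above repeats once per loop iteration boils down to the following shell sketch (a condensed reading of the trace, not the verbatim rpc.sh source; rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py, and the NQN, serial, host IDs and addresses are the values visible in the log):

rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
waitforserial SPDKISFASTANDAWESOME             # poll lsblk until the namespace serial appears
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
waitforserial_disconnect SPDKISFASTANDAWESOME  # poll lsblk until the serial is gone again
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1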
00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.693 [2024-07-15 14:52:21.729944] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.693 14:52:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:07.604 14:52:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:07.604 14:52:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:07.604 14:52:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.604 14:52:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:07.604 14:52:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:09.515 14:52:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 
00:10:09.515 14:52:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:09.515 14:52:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.515 14:52:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:09.515 14:52:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.515 14:52:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:09.515 14:52:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.458 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.458 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:10.458 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:10.458 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.458 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:10.458 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.458 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 [2024-07-15 14:52:26.414846] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 [2024-07-15 14:52:26.474996] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 
14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.459 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 [2024-07-15 14:52:26.535190] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 [2024-07-15 14:52:26.591416] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 [2024-07-15 14:52:26.651623] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:10.721 "tick_rate": 2400000000, 00:10:10.721 "poll_groups": [ 00:10:10.721 { 00:10:10.721 "name": "nvmf_tgt_poll_group_000", 00:10:10.721 "admin_qpairs": 2, 00:10:10.721 "io_qpairs": 27, 00:10:10.721 "current_admin_qpairs": 0, 00:10:10.721 "current_io_qpairs": 0, 00:10:10.721 "pending_bdev_io": 0, 00:10:10.721 "completed_nvme_io": 127, 00:10:10.721 "transports": [ 00:10:10.721 { 00:10:10.721 "trtype": "RDMA", 00:10:10.721 "pending_data_buffer": 0, 00:10:10.721 "devices": [ 00:10:10.721 { 00:10:10.721 "name": "mlx5_0", 00:10:10.721 "polls": 5041728, 00:10:10.721 "idle_polls": 5041407, 00:10:10.721 "completions": 363, 00:10:10.721 "requests": 181, 00:10:10.721 "request_latency": 29739704, 00:10:10.721 "pending_free_request": 0, 00:10:10.721 "pending_rdma_read": 0, 00:10:10.721 "pending_rdma_write": 0, 00:10:10.721 "pending_rdma_send": 0, 00:10:10.721 "total_send_wrs": 307, 00:10:10.721 "send_doorbell_updates": 158, 00:10:10.721 "total_recv_wrs": 4277, 00:10:10.721 "recv_doorbell_updates": 158 00:10:10.721 }, 00:10:10.721 { 00:10:10.721 "name": "mlx5_1", 00:10:10.721 "polls": 5041728, 00:10:10.721 "idle_polls": 5041728, 00:10:10.721 "completions": 0, 00:10:10.721 "requests": 0, 00:10:10.721 "request_latency": 0, 00:10:10.721 "pending_free_request": 0, 00:10:10.721 "pending_rdma_read": 0, 00:10:10.721 "pending_rdma_write": 0, 00:10:10.721 "pending_rdma_send": 0, 00:10:10.721 "total_send_wrs": 0, 00:10:10.721 "send_doorbell_updates": 0, 00:10:10.721 "total_recv_wrs": 4096, 00:10:10.721 "recv_doorbell_updates": 1 00:10:10.721 } 
00:10:10.721 ] 00:10:10.721 } 00:10:10.721 ] 00:10:10.721 }, 00:10:10.721 { 00:10:10.721 "name": "nvmf_tgt_poll_group_001", 00:10:10.721 "admin_qpairs": 2, 00:10:10.721 "io_qpairs": 26, 00:10:10.721 "current_admin_qpairs": 0, 00:10:10.721 "current_io_qpairs": 0, 00:10:10.721 "pending_bdev_io": 0, 00:10:10.721 "completed_nvme_io": 91, 00:10:10.721 "transports": [ 00:10:10.721 { 00:10:10.721 "trtype": "RDMA", 00:10:10.721 "pending_data_buffer": 0, 00:10:10.721 "devices": [ 00:10:10.721 { 00:10:10.721 "name": "mlx5_0", 00:10:10.721 "polls": 5294491, 00:10:10.721 "idle_polls": 5294237, 00:10:10.721 "completions": 290, 00:10:10.721 "requests": 145, 00:10:10.721 "request_latency": 23516364, 00:10:10.721 "pending_free_request": 0, 00:10:10.721 "pending_rdma_read": 0, 00:10:10.721 "pending_rdma_write": 0, 00:10:10.721 "pending_rdma_send": 0, 00:10:10.721 "total_send_wrs": 236, 00:10:10.721 "send_doorbell_updates": 126, 00:10:10.721 "total_recv_wrs": 4241, 00:10:10.721 "recv_doorbell_updates": 127 00:10:10.721 }, 00:10:10.721 { 00:10:10.721 "name": "mlx5_1", 00:10:10.721 "polls": 5294491, 00:10:10.721 "idle_polls": 5294491, 00:10:10.721 "completions": 0, 00:10:10.721 "requests": 0, 00:10:10.721 "request_latency": 0, 00:10:10.721 "pending_free_request": 0, 00:10:10.721 "pending_rdma_read": 0, 00:10:10.721 "pending_rdma_write": 0, 00:10:10.721 "pending_rdma_send": 0, 00:10:10.721 "total_send_wrs": 0, 00:10:10.721 "send_doorbell_updates": 0, 00:10:10.721 "total_recv_wrs": 4096, 00:10:10.721 "recv_doorbell_updates": 1 00:10:10.721 } 00:10:10.721 ] 00:10:10.721 } 00:10:10.721 ] 00:10:10.721 }, 00:10:10.721 { 00:10:10.721 "name": "nvmf_tgt_poll_group_002", 00:10:10.721 "admin_qpairs": 1, 00:10:10.721 "io_qpairs": 26, 00:10:10.721 "current_admin_qpairs": 0, 00:10:10.721 "current_io_qpairs": 0, 00:10:10.721 "pending_bdev_io": 0, 00:10:10.721 "completed_nvme_io": 77, 00:10:10.721 "transports": [ 00:10:10.721 { 00:10:10.721 "trtype": "RDMA", 00:10:10.721 "pending_data_buffer": 0, 00:10:10.721 "devices": [ 00:10:10.721 { 00:10:10.721 "name": "mlx5_0", 00:10:10.721 "polls": 5500963, 00:10:10.721 "idle_polls": 5500772, 00:10:10.721 "completions": 211, 00:10:10.721 "requests": 105, 00:10:10.721 "request_latency": 16978620, 00:10:10.721 "pending_free_request": 0, 00:10:10.721 "pending_rdma_read": 0, 00:10:10.721 "pending_rdma_write": 0, 00:10:10.721 "pending_rdma_send": 0, 00:10:10.721 "total_send_wrs": 170, 00:10:10.721 "send_doorbell_updates": 93, 00:10:10.721 "total_recv_wrs": 4201, 00:10:10.721 "recv_doorbell_updates": 93 00:10:10.721 }, 00:10:10.721 { 00:10:10.721 "name": "mlx5_1", 00:10:10.721 "polls": 5500963, 00:10:10.721 "idle_polls": 5500963, 00:10:10.721 "completions": 0, 00:10:10.721 "requests": 0, 00:10:10.721 "request_latency": 0, 00:10:10.721 "pending_free_request": 0, 00:10:10.721 "pending_rdma_read": 0, 00:10:10.721 "pending_rdma_write": 0, 00:10:10.721 "pending_rdma_send": 0, 00:10:10.721 "total_send_wrs": 0, 00:10:10.721 "send_doorbell_updates": 0, 00:10:10.721 "total_recv_wrs": 4096, 00:10:10.721 "recv_doorbell_updates": 1 00:10:10.721 } 00:10:10.721 ] 00:10:10.721 } 00:10:10.721 ] 00:10:10.721 }, 00:10:10.721 { 00:10:10.721 "name": "nvmf_tgt_poll_group_003", 00:10:10.721 "admin_qpairs": 2, 00:10:10.721 "io_qpairs": 26, 00:10:10.721 "current_admin_qpairs": 0, 00:10:10.721 "current_io_qpairs": 0, 00:10:10.721 "pending_bdev_io": 0, 00:10:10.721 "completed_nvme_io": 160, 00:10:10.721 "transports": [ 00:10:10.721 { 00:10:10.721 "trtype": "RDMA", 00:10:10.721 "pending_data_buffer": 0, 00:10:10.721 
"devices": [ 00:10:10.721 { 00:10:10.721 "name": "mlx5_0", 00:10:10.721 "polls": 3498114, 00:10:10.721 "idle_polls": 3497737, 00:10:10.721 "completions": 426, 00:10:10.721 "requests": 213, 00:10:10.721 "request_latency": 45668752, 00:10:10.721 "pending_free_request": 0, 00:10:10.721 "pending_rdma_read": 0, 00:10:10.721 "pending_rdma_write": 0, 00:10:10.721 "pending_rdma_send": 0, 00:10:10.721 "total_send_wrs": 372, 00:10:10.721 "send_doorbell_updates": 186, 00:10:10.721 "total_recv_wrs": 4309, 00:10:10.721 "recv_doorbell_updates": 187 00:10:10.721 }, 00:10:10.721 { 00:10:10.721 "name": "mlx5_1", 00:10:10.721 "polls": 3498114, 00:10:10.721 "idle_polls": 3498114, 00:10:10.721 "completions": 0, 00:10:10.721 "requests": 0, 00:10:10.721 "request_latency": 0, 00:10:10.721 "pending_free_request": 0, 00:10:10.721 "pending_rdma_read": 0, 00:10:10.721 "pending_rdma_write": 0, 00:10:10.721 "pending_rdma_send": 0, 00:10:10.721 "total_send_wrs": 0, 00:10:10.721 "send_doorbell_updates": 0, 00:10:10.721 "total_recv_wrs": 4096, 00:10:10.721 "recv_doorbell_updates": 1 00:10:10.721 } 00:10:10.721 ] 00:10:10.721 } 00:10:10.721 ] 00:10:10.721 } 00:10:10.721 ] 00:10:10.721 }' 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:10.721 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 115903440 > 0 )) 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:10.982 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:10.983 rmmod nvme_rdma 00:10:10.983 rmmod nvme_fabrics 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1685677 ']' 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1685677 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1685677 ']' 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1685677 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:10.983 14:52:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1685677 00:10:10.983 14:52:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:10.983 14:52:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:10.983 14:52:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1685677' 00:10:10.983 killing process with pid 1685677 00:10:10.983 14:52:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1685677 00:10:10.983 14:52:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1685677 00:10:11.243 14:52:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.243 14:52:27 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:11.243 00:10:11.243 real 0m44.616s 00:10:11.243 user 2m26.501s 00:10:11.243 sys 0m7.511s 00:10:11.243 14:52:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.243 14:52:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.243 ************************************ 00:10:11.243 END TEST nvmf_rpc 00:10:11.243 ************************************ 00:10:11.243 14:52:27 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:11.243 14:52:27 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:11.243 14:52:27 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.243 14:52:27 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.243 14:52:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:11.505 ************************************ 00:10:11.505 START TEST nvmf_invalid 00:10:11.505 ************************************ 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:11.505 * Looking for test storage... 
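The stats assertions near the end of the nvmf_rpc trace each sum one JSON field across all poll groups of the nvmf_get_stats output. A minimal sketch of that jsum-style helper, assuming the stats JSON is held in "$stats" as in the trace (the input plumbing is not shown in the log):

jsum() {
    local filter=$1
    # emit the field once per poll group / device, then sum the values with awk
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))                               # 105 in this run
(( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))  # 115903440 in this run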
00:10:11.505 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.505 14:52:27 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.506 14:52:27 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.506 14:52:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.648 
14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:19.648 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:19.648 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:19.648 Found net devices under 0000:98:00.0: mlx_0_0 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:19.648 Found net devices under 0000:98:00.1: mlx_0_1 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:19.648 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
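For each matching ConnectX PCI function, the trace above resolves the netdev name through sysfs. A rough sketch of that step, with $pci standing for an address such as 0000:98:00.0 from the earlier device scan:

pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:98:00.0/net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix, keep mlx_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"
net_devs+=("${pci_net_devs[@]}")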
00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:19.649 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:19.649 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:10:19.649 altname enp152s0f0np0 00:10:19.649 altname ens817f0np0 00:10:19.649 inet 192.168.100.8/24 scope global mlx_0_0 00:10:19.649 valid_lft forever preferred_lft forever 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:19.649 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:19.649 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:10:19.649 altname enp152s0f1np1 00:10:19.649 altname ens817f1np1 00:10:19.649 inet 192.168.100.9/24 scope global mlx_0_1 00:10:19.649 valid_lft forever preferred_lft forever 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:19.649 192.168.100.9' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:19.649 192.168.100.9' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:19.649 192.168.100.9' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1697254 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1697254 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1697254 ']' 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.649 14:52:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:19.649 [2024-07-15 14:52:35.457404] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:19.649 [2024-07-15 14:52:35.457471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.649 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.649 [2024-07-15 14:52:35.528745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.649 [2024-07-15 14:52:35.602980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.649 [2024-07-15 14:52:35.603019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.649 [2024-07-15 14:52:35.603026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.649 [2024-07-15 14:52:35.603032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.649 [2024-07-15 14:52:35.603038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
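The block above is the RDMA address-discovery step in nvmf/common.sh: each interface returned by get_rdma_if_list is queried with ip -o -4 addr show, the results are collected into RDMA_IP_LIST, and the first two addresses (peeled off with the head -n 1 and tail -n +2 entries in the trace) become the target IPs for the rest of the run. A condensed bash sketch of that logic, using the interface names seen in this run; the helper body is reconstructed from the trace for illustration, not copied from the script:

    get_ip_address() {
        local interface=$1
        # field 4 of "ip -o -4 addr show <if>" is ADDR/PREFIX; drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run

With the addresses known, nvmfappstart launches build/bin/nvmf_tgt with core mask 0xF and waits on /var/tmp/spdk.sock; the EAL and reactor notices that follow are that target coming up.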
00:10:19.649 [2024-07-15 14:52:35.603206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.649 [2024-07-15 14:52:35.603332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.649 [2024-07-15 14:52:35.603430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.649 [2024-07-15 14:52:35.603431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.220 14:52:36 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.220 14:52:36 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:20.220 14:52:36 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.220 14:52:36 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.220 14:52:36 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:20.220 14:52:36 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.220 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:20.221 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10899 00:10:20.479 [2024-07-15 14:52:36.421185] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:20.479 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:20.479 { 00:10:20.479 "nqn": "nqn.2016-06.io.spdk:cnode10899", 00:10:20.479 "tgt_name": "foobar", 00:10:20.480 "method": "nvmf_create_subsystem", 00:10:20.480 "req_id": 1 00:10:20.480 } 00:10:20.480 Got JSON-RPC error response 00:10:20.480 response: 00:10:20.480 { 00:10:20.480 "code": -32603, 00:10:20.480 "message": "Unable to find target foobar" 00:10:20.480 }' 00:10:20.480 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:20.480 { 00:10:20.480 "nqn": "nqn.2016-06.io.spdk:cnode10899", 00:10:20.480 "tgt_name": "foobar", 00:10:20.480 "method": "nvmf_create_subsystem", 00:10:20.480 "req_id": 1 00:10:20.480 } 00:10:20.480 Got JSON-RPC error response 00:10:20.480 response: 00:10:20.480 { 00:10:20.480 "code": -32603, 00:10:20.480 "message": "Unable to find target foobar" 00:10:20.480 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:20.480 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:20.480 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23936 00:10:20.738 [2024-07-15 14:52:36.601779] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23936: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:20.738 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:20.738 { 00:10:20.738 "nqn": "nqn.2016-06.io.spdk:cnode23936", 00:10:20.738 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:20.738 "method": "nvmf_create_subsystem", 00:10:20.738 "req_id": 1 00:10:20.738 } 00:10:20.738 Got JSON-RPC error response 00:10:20.738 response: 00:10:20.738 { 00:10:20.738 "code": -32602, 00:10:20.738 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:20.738 }' 00:10:20.738 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # 
[[ request: 00:10:20.738 { 00:10:20.738 "nqn": "nqn.2016-06.io.spdk:cnode23936", 00:10:20.738 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:20.738 "method": "nvmf_create_subsystem", 00:10:20.738 "req_id": 1 00:10:20.738 } 00:10:20.738 Got JSON-RPC error response 00:10:20.738 response: 00:10:20.738 { 00:10:20.738 "code": -32602, 00:10:20.738 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:20.738 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:20.738 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:20.738 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25649 00:10:20.738 [2024-07-15 14:52:36.778372] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25649: invalid model number 'SPDK_Controller' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:20.998 { 00:10:20.998 "nqn": "nqn.2016-06.io.spdk:cnode25649", 00:10:20.998 "model_number": "SPDK_Controller\u001f", 00:10:20.998 "method": "nvmf_create_subsystem", 00:10:20.998 "req_id": 1 00:10:20.998 } 00:10:20.998 Got JSON-RPC error response 00:10:20.998 response: 00:10:20.998 { 00:10:20.998 "code": -32602, 00:10:20.998 "message": "Invalid MN SPDK_Controller\u001f" 00:10:20.998 }' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:20.998 { 00:10:20.998 "nqn": "nqn.2016-06.io.spdk:cnode25649", 00:10:20.998 "model_number": "SPDK_Controller\u001f", 00:10:20.998 "method": "nvmf_create_subsystem", 00:10:20.998 "req_id": 1 00:10:20.998 } 00:10:20.998 Got JSON-RPC error response 00:10:20.998 response: 00:10:20.998 { 00:10:20.998 "code": -32602, 00:10:20.998 "message": "Invalid MN SPDK_Controller\u001f" 00:10:20.998 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 78 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:20.998 14:52:36 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:20.998 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x38' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'sNM`"P ~ _*D?oWE`8G;~' 00:10:20.999 14:52:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'sNM`"P ~ _*D?oWE`8G;~' nqn.2016-06.io.spdk:cnode30330 00:10:21.265 [2024-07-15 14:52:37.111425] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30330: invalid serial number 'sNM`"P ~ _*D?oWE`8G;~' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:21.265 { 00:10:21.265 "nqn": "nqn.2016-06.io.spdk:cnode30330", 00:10:21.265 "serial_number": "sNM`\"P ~ _*D?oWE`8G;~", 00:10:21.265 "method": "nvmf_create_subsystem", 00:10:21.265 "req_id": 1 00:10:21.265 } 00:10:21.265 Got JSON-RPC error response 00:10:21.265 response: 00:10:21.265 { 00:10:21.265 "code": -32602, 00:10:21.265 "message": "Invalid SN sNM`\"P ~ _*D?oWE`8G;~" 00:10:21.265 }' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:21.265 { 00:10:21.265 "nqn": "nqn.2016-06.io.spdk:cnode30330", 00:10:21.265 "serial_number": "sNM`\"P ~ _*D?oWE`8G;~", 00:10:21.265 "method": "nvmf_create_subsystem", 00:10:21.265 "req_id": 1 00:10:21.265 } 00:10:21.265 Got JSON-RPC error response 00:10:21.265 response: 00:10:21.265 { 00:10:21.265 "code": -32602, 00:10:21.265 "message": "Invalid SN sNM`\"P ~ _*D?oWE`8G;~" 00:10:21.265 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' 
'55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 
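The long run of printf, echo -e and string+= entries through here is gen_random_s in invalid.sh building a random 41-character model number one printable character at a time (a 21-character serial number was produced the same way just above). Reduced to its core it behaves roughly like the sketch below; the real helper walks the chars table expanded in the trace and uses a printf %x / echo -e pair per character, so this is a compressed illustration rather than the script itself:

    gen_random_s() {
        local length=$1 ll code ch string=
        for ((ll = 0; ll < length; ll++)); do
            code=$((32 + RANDOM % 96))                 # ASCII 32..127, the chars table above
            printf -v ch "\\x$(printf '%x' "$code")"   # hex code -> character
            string+=$ch
        done
        echo "$string"
    }

The resulting strings (the 21-character serial echoed earlier and the 41-character model number echoed below) are then passed to nvmf_create_subsystem, which must reject them.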
00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=9 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:21.265 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:21.266 
14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.266 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.528 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.529 14:52:37 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]] 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo ' .k&Qhp3r{mR2q9S8lb#+nIw%I27c/F]}X=jCygXo' 00:10:21.529 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ' .k&Qhp3r{mR2q9S8lb#+nIw%I27c/F]}X=jCygXo' nqn.2016-06.io.spdk:cnode4207 00:10:21.529 [2024-07-15 14:52:37.588959] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4207: invalid model number ' .k&Qhp3r{mR2q9S8lb#+nIw%I27c/F]}X=jCygXo' 00:10:21.788 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:21.788 { 00:10:21.788 "nqn": "nqn.2016-06.io.spdk:cnode4207", 00:10:21.788 "model_number": " .k&Qhp3r{mR2q9S8lb#+nIw%I27c/F]}X=jCygXo", 00:10:21.788 "method": "nvmf_create_subsystem", 00:10:21.788 "req_id": 1 00:10:21.788 } 00:10:21.788 Got JSON-RPC error response 00:10:21.788 response: 00:10:21.788 { 00:10:21.788 "code": -32602, 00:10:21.788 "message": "Invalid MN .k&Qhp3r{mR2q9S8lb#+nIw%I27c/F]}X=jCygXo" 00:10:21.788 }' 00:10:21.788 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:21.788 { 00:10:21.788 "nqn": "nqn.2016-06.io.spdk:cnode4207", 00:10:21.788 "model_number": " .k&Qhp3r{mR2q9S8lb#+nIw%I27c/F]}X=jCygXo", 00:10:21.788 "method": "nvmf_create_subsystem", 00:10:21.788 "req_id": 1 00:10:21.788 } 00:10:21.788 Got JSON-RPC error response 00:10:21.788 response: 00:10:21.788 { 00:10:21.788 "code": -32602, 00:10:21.788 "message": "Invalid MN .k&Qhp3r{mR2q9S8lb#+nIw%I27c/F]}X=jCygXo" 00:10:21.788 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:21.788 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:10:21.788 [2024-07-15 14:52:37.793120] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1563ad0/0x1567fc0) succeed. 00:10:21.788 [2024-07-15 14:52:37.806616] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1565110/0x15a9650) succeed. 
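Every check in this test follows the same shape: call rpc.py with one deliberately invalid argument, capture the JSON-RPC error it prints, and glob-match the message (the heavily backslash-escaped patterns in the trace are just how xtrace renders those globs). A sketch of one such check; the rpc.py arguments and the expected message are taken from the trace above, while the capture and redirection glue is illustrative:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # an unknown target name must be rejected with a descriptive JSON-RPC error
    out=$("$rpc_py" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10899 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

The checks that follow exercise the listener-removal path and the cntlid bounds (min 0, min 65520, max 0, max 65520, and min greater than max) the same way, each expecting an "Invalid parameters" or "Invalid cntlid range" error rather than a created subsystem.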
00:10:22.047 14:52:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:22.307 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:10:22.307 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:10:22.307 192.168.100.9' 00:10:22.307 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:22.307 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:10:22.307 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:10:22.307 [2024-07-15 14:52:38.272797] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:22.307 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:22.307 { 00:10:22.307 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:22.307 "listen_address": { 00:10:22.307 "trtype": "rdma", 00:10:22.307 "traddr": "192.168.100.8", 00:10:22.307 "trsvcid": "4421" 00:10:22.307 }, 00:10:22.307 "method": "nvmf_subsystem_remove_listener", 00:10:22.307 "req_id": 1 00:10:22.307 } 00:10:22.307 Got JSON-RPC error response 00:10:22.307 response: 00:10:22.307 { 00:10:22.307 "code": -32602, 00:10:22.307 "message": "Invalid parameters" 00:10:22.307 }' 00:10:22.307 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:22.307 { 00:10:22.307 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:22.307 "listen_address": { 00:10:22.307 "trtype": "rdma", 00:10:22.307 "traddr": "192.168.100.8", 00:10:22.307 "trsvcid": "4421" 00:10:22.307 }, 00:10:22.307 "method": "nvmf_subsystem_remove_listener", 00:10:22.307 "req_id": 1 00:10:22.307 } 00:10:22.307 Got JSON-RPC error response 00:10:22.307 response: 00:10:22.307 { 00:10:22.307 "code": -32602, 00:10:22.307 "message": "Invalid parameters" 00:10:22.307 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:22.307 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26132 -i 0 00:10:22.567 [2024-07-15 14:52:38.441312] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26132: invalid cntlid range [0-65519] 00:10:22.567 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:22.567 { 00:10:22.567 "nqn": "nqn.2016-06.io.spdk:cnode26132", 00:10:22.567 "min_cntlid": 0, 00:10:22.567 "method": "nvmf_create_subsystem", 00:10:22.567 "req_id": 1 00:10:22.567 } 00:10:22.567 Got JSON-RPC error response 00:10:22.567 response: 00:10:22.567 { 00:10:22.567 "code": -32602, 00:10:22.567 "message": "Invalid cntlid range [0-65519]" 00:10:22.567 }' 00:10:22.567 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:22.567 { 00:10:22.567 "nqn": "nqn.2016-06.io.spdk:cnode26132", 00:10:22.567 "min_cntlid": 0, 00:10:22.567 "method": "nvmf_create_subsystem", 00:10:22.567 "req_id": 1 00:10:22.567 } 00:10:22.567 Got JSON-RPC error response 00:10:22.567 response: 00:10:22.567 { 00:10:22.567 "code": -32602, 00:10:22.567 "message": "Invalid cntlid range [0-65519]" 00:10:22.567 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:22.567 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31962 -i 65520 00:10:22.567 [2024-07-15 14:52:38.613902] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31962: invalid cntlid range [65520-65519] 00:10:22.851 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:22.851 { 00:10:22.851 "nqn": "nqn.2016-06.io.spdk:cnode31962", 00:10:22.851 "min_cntlid": 65520, 00:10:22.851 "method": "nvmf_create_subsystem", 00:10:22.851 "req_id": 1 00:10:22.851 } 00:10:22.851 Got JSON-RPC error response 00:10:22.851 response: 00:10:22.851 { 00:10:22.851 "code": -32602, 00:10:22.851 "message": "Invalid cntlid range [65520-65519]" 00:10:22.851 }' 00:10:22.851 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:22.851 { 00:10:22.851 "nqn": "nqn.2016-06.io.spdk:cnode31962", 00:10:22.851 "min_cntlid": 65520, 00:10:22.851 "method": "nvmf_create_subsystem", 00:10:22.851 "req_id": 1 00:10:22.851 } 00:10:22.851 Got JSON-RPC error response 00:10:22.851 response: 00:10:22.851 { 00:10:22.851 "code": -32602, 00:10:22.851 "message": "Invalid cntlid range [65520-65519]" 00:10:22.851 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:22.851 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2669 -I 0 00:10:22.851 [2024-07-15 14:52:38.774452] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2669: invalid cntlid range [1-0] 00:10:22.851 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:22.851 { 00:10:22.851 "nqn": "nqn.2016-06.io.spdk:cnode2669", 00:10:22.851 "max_cntlid": 0, 00:10:22.851 "method": "nvmf_create_subsystem", 00:10:22.851 "req_id": 1 00:10:22.851 } 00:10:22.851 Got JSON-RPC error response 00:10:22.851 response: 00:10:22.851 { 00:10:22.851 "code": -32602, 00:10:22.851 "message": "Invalid cntlid range [1-0]" 00:10:22.851 }' 00:10:22.851 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:22.851 { 00:10:22.851 "nqn": "nqn.2016-06.io.spdk:cnode2669", 00:10:22.851 "max_cntlid": 0, 00:10:22.851 "method": "nvmf_create_subsystem", 00:10:22.851 "req_id": 1 00:10:22.851 } 00:10:22.851 Got JSON-RPC error response 00:10:22.851 response: 00:10:22.851 { 00:10:22.851 "code": -32602, 00:10:22.851 "message": "Invalid cntlid range [1-0]" 00:10:22.851 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:22.851 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3313 -I 65520 00:10:23.139 [2024-07-15 14:52:38.931007] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3313: invalid cntlid range [1-65520] 00:10:23.139 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:23.139 { 00:10:23.139 "nqn": "nqn.2016-06.io.spdk:cnode3313", 00:10:23.139 "max_cntlid": 65520, 00:10:23.139 "method": "nvmf_create_subsystem", 00:10:23.139 "req_id": 1 00:10:23.139 } 00:10:23.139 Got JSON-RPC error response 00:10:23.139 response: 00:10:23.139 { 00:10:23.139 "code": -32602, 00:10:23.139 "message": "Invalid cntlid range [1-65520]" 00:10:23.139 }' 00:10:23.139 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:23.139 { 00:10:23.139 "nqn": "nqn.2016-06.io.spdk:cnode3313", 
00:10:23.139 "max_cntlid": 65520, 00:10:23.139 "method": "nvmf_create_subsystem", 00:10:23.139 "req_id": 1 00:10:23.139 } 00:10:23.139 Got JSON-RPC error response 00:10:23.139 response: 00:10:23.139 { 00:10:23.139 "code": -32602, 00:10:23.139 "message": "Invalid cntlid range [1-65520]" 00:10:23.139 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:23.139 14:52:38 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9507 -i 6 -I 5 00:10:23.139 [2024-07-15 14:52:39.095629] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9507: invalid cntlid range [6-5] 00:10:23.139 14:52:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:23.139 { 00:10:23.139 "nqn": "nqn.2016-06.io.spdk:cnode9507", 00:10:23.139 "min_cntlid": 6, 00:10:23.139 "max_cntlid": 5, 00:10:23.139 "method": "nvmf_create_subsystem", 00:10:23.139 "req_id": 1 00:10:23.139 } 00:10:23.139 Got JSON-RPC error response 00:10:23.139 response: 00:10:23.139 { 00:10:23.139 "code": -32602, 00:10:23.139 "message": "Invalid cntlid range [6-5]" 00:10:23.139 }' 00:10:23.139 14:52:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:23.139 { 00:10:23.139 "nqn": "nqn.2016-06.io.spdk:cnode9507", 00:10:23.139 "min_cntlid": 6, 00:10:23.139 "max_cntlid": 5, 00:10:23.139 "method": "nvmf_create_subsystem", 00:10:23.139 "req_id": 1 00:10:23.139 } 00:10:23.139 Got JSON-RPC error response 00:10:23.139 response: 00:10:23.139 { 00:10:23.139 "code": -32602, 00:10:23.139 "message": "Invalid cntlid range [6-5]" 00:10:23.139 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:23.139 14:52:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:23.400 { 00:10:23.400 "name": "foobar", 00:10:23.400 "method": "nvmf_delete_target", 00:10:23.400 "req_id": 1 00:10:23.400 } 00:10:23.400 Got JSON-RPC error response 00:10:23.400 response: 00:10:23.400 { 00:10:23.400 "code": -32602, 00:10:23.400 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:23.400 }' 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:23.400 { 00:10:23.400 "name": "foobar", 00:10:23.400 "method": "nvmf_delete_target", 00:10:23.400 "req_id": 1 00:10:23.400 } 00:10:23.400 Got JSON-RPC error response 00:10:23.400 response: 00:10:23.400 { 00:10:23.400 "code": -32602, 00:10:23.400 "message": "The specified target doesn't exist, cannot delete it." 
00:10:23.400 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:23.400 rmmod nvme_rdma 00:10:23.400 rmmod nvme_fabrics 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1697254 ']' 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1697254 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1697254 ']' 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1697254 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1697254 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1697254' 00:10:23.400 killing process with pid 1697254 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1697254 00:10:23.400 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1697254 00:10:23.661 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.661 14:52:39 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:23.661 00:10:23.661 real 0m12.229s 00:10:23.661 user 0m20.122s 00:10:23.661 sys 0m6.824s 00:10:23.661 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.661 14:52:39 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:23.661 ************************************ 00:10:23.661 END TEST nvmf_invalid 00:10:23.661 ************************************ 00:10:23.661 14:52:39 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:23.661 14:52:39 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:10:23.661 14:52:39 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:23.661 14:52:39 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.661 
14:52:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:23.661 ************************************ 00:10:23.661 START TEST nvmf_abort 00:10:23.661 ************************************ 00:10:23.661 14:52:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:10:23.661 * Looking for test storage... 00:10:23.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:23.661 14:52:39 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.661 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:23.923 14:52:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:32.090 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:32.090 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:32.090 Found net devices under 0000:98:00.0: mlx_0_0 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:32.090 Found net devices under 0000:98:00.1: mlx_0_1 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.090 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:32.091 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:32.091 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:10:32.091 altname enp152s0f0np0 00:10:32.091 altname ens817f0np0 00:10:32.091 inet 192.168.100.8/24 scope global mlx_0_0 00:10:32.091 valid_lft forever preferred_lft forever 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:32.091 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:32.091 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:10:32.091 altname enp152s0f1np1 00:10:32.091 altname ens817f1np1 00:10:32.091 inet 192.168.100.9/24 scope global mlx_0_1 00:10:32.091 valid_lft forever preferred_lft forever 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:32.091 192.168.100.9' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:32.091 192.168.100.9' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:32.091 192.168.100.9' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
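The allocate_nic_ips pass above reduces each RDMA-capable netdev to its IPv4 address with a short pipeline over ip -o -4 addr show; the harness then collects the results into RDMA_IP_LIST and takes the first and second entries. A minimal sketch of that derivation, assuming the mlx_0_0/mlx_0_1 interface names and addresses seen in this run:

    # Column 4 of `ip -o -4 addr show <if>` is the CIDR address (e.g. 192.168.100.8/24);
    # stripping the prefix length yields the bare IP, as the get_ip_address calls above do.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run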
00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1702481 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1702481 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1702481 ']' 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.091 14:52:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.092 14:52:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.092 14:52:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.092 [2024-07-15 14:52:47.866143] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:32.092 [2024-07-15 14:52:47.866210] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.092 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.092 [2024-07-15 14:52:47.954277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.092 [2024-07-15 14:52:48.049486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.092 [2024-07-15 14:52:48.049554] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.092 [2024-07-15 14:52:48.049562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.092 [2024-07-15 14:52:48.049570] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.092 [2024-07-15 14:52:48.049577] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
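The nvmfappstart step above amounts to launching the target binary with the logged flags and waiting for its RPC socket to answer. A hedged sketch, assuming the workspace path from this job and a simple poll in place of the harness's waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0xE: reactors on cores 1-3.
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten blocks until the UNIX-domain RPC socket responds; a crude equivalent:
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done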
00:10:32.092 [2024-07-15 14:52:48.049721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.092 [2024-07-15 14:52:48.049884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.092 [2024-07-15 14:52:48.049885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.666 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.666 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:32.666 14:52:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.666 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:32.666 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.666 14:52:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.666 14:52:48 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:10:32.666 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.666 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.927 [2024-07-15 14:52:48.730535] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b47920/0x1b4be10) succeed. 00:10:32.927 [2024-07-15 14:52:48.744775] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b48ec0/0x1b8d4a0) succeed. 00:10:32.927 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.927 14:52:48 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:32.927 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.927 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.928 Malloc0 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.928 Delay0 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.928 [2024-07-15 14:52:48.905317] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.928 14:52:48 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:32.928 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.188 [2024-07-15 14:52:49.015328] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:35.102 Initializing NVMe Controllers 00:10:35.102 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:35.102 controller IO queue size 128 less than required 00:10:35.102 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:35.102 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:35.102 Initialization complete. Launching workers. 00:10:35.102 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37683 00:10:35.102 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37744, failed to submit 62 00:10:35.102 success 37684, unsuccess 60, failed 0 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.102 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:35.102 rmmod nvme_rdma 00:10:35.364 rmmod nvme_fabrics 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 
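Taken together, the rpc_cmd calls above set up the abort target and drive traffic at it; a sketch of the same sequence issued through scripts/rpc.py directly (rpc_cmd is assumed to be a thin wrapper over it), with the parameters this run logged:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB bdev, 4096-byte blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    # The delay bdev keeps I/O in flight long enough for the abort example to have commands to cancel.
    "$SPDK/build/examples/abort" -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128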
00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1702481 ']' 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1702481 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1702481 ']' 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1702481 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1702481 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1702481' 00:10:35.364 killing process with pid 1702481 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1702481 00:10:35.364 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1702481 00:10:35.626 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.626 14:52:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:35.626 00:10:35.626 real 0m11.820s 00:10:35.626 user 0m14.728s 00:10:35.626 sys 0m6.388s 00:10:35.626 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.626 14:52:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.626 ************************************ 00:10:35.626 END TEST nvmf_abort 00:10:35.626 ************************************ 00:10:35.626 14:52:51 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:35.626 14:52:51 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:35.626 14:52:51 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.626 14:52:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.626 14:52:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:35.626 ************************************ 00:10:35.626 START TEST nvmf_ns_hotplug_stress 00:10:35.626 ************************************ 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:35.626 * Looking for test storage... 
00:10:35.626 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.626 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 
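The host identity used by later nvme connect calls is derived once in common.sh from nvme gen-hostnqn, matching the NVME_HOSTNQN/NVME_HOSTID values logged above. A small sketch of that derivation (the suffix extraction is an assumption, consistent with the logged values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # the uuid suffix, 00539ede-7deb-ec11-9bc7-a4bf01928396 above
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVME_CONNECT='nvme connect -i 15'    # the -i 15 form is substituted for rdma ports later in this log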
00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.627 14:52:51 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:43.769 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:43.769 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:98:00.0: mlx_0_0' 00:10:43.769 Found net devices under 0000:98:00.0: mlx_0_0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:43.769 Found net devices under 0000:98:00.1: mlx_0_1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:43.769 14:52:59 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:43.769 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:43.769 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:10:43.769 altname enp152s0f0np0 00:10:43.769 altname ens817f0np0 00:10:43.769 inet 192.168.100.8/24 scope global mlx_0_0 00:10:43.769 valid_lft forever preferred_lft forever 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:43.769 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:43.769 
link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:10:43.769 altname enp152s0f1np1 00:10:43.769 altname ens817f1np1 00:10:43.769 inet 192.168.100.9/24 scope global mlx_0_1 00:10:43.769 valid_lft forever preferred_lft forever 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:43.769 192.168.100.9' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:43.769 192.168.100.9' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:43.769 192.168.100.9' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1707472 00:10:43.769 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1707472 00:10:43.770 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:43.770 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1707472 ']' 00:10:43.770 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.770 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.770 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
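For reference, the address discovery traced above reduces to a small shell helper: the script lists the first IPv4 address on each RDMA netdev and exports it for the target. A minimal sketch, assuming the get_ip_address pipeline echoed in the trace and the mlx_0_0/mlx_0_1 interface names that this particular run detected:

  # Mirrors the ip/awk/cut pipeline echoed by nvmf/common.sh above; the
  # interface names are the ones reported in this run, not a general default.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run

With both addresses resolved, the trace sets NVMF_TRANSPORT_OPTS to '-t rdma --num-shared-buffers 1024', loads nvme-rdma and starts nvmf_tgt, which is what the entries that follow show.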
00:10:43.770 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.770 14:52:59 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.770 [2024-07-15 14:52:59.550362] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:43.770 [2024-07-15 14:52:59.550430] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.770 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.770 [2024-07-15 14:52:59.641277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:43.770 [2024-07-15 14:52:59.734934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.770 [2024-07-15 14:52:59.734995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.770 [2024-07-15 14:52:59.735003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.770 [2024-07-15 14:52:59.735010] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.770 [2024-07-15 14:52:59.735016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.770 [2024-07-15 14:52:59.735149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.770 [2024-07-15 14:52:59.735337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.770 [2024-07-15 14:52:59.735492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.342 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.342 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:44.342 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.342 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:44.342 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.342 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.342 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:44.342 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:44.602 [2024-07-15 14:53:00.548014] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2049920/0x204de10) succeed. 00:10:44.602 [2024-07-15 14:53:00.562554] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x204aec0/0x208f4a0) succeed. 
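The long run of rpc.py calls that follows is easier to read as the loop it comes from. A condensed sketch of what target/ns_hotplug_stress.sh drives over the transport created above, using the rpc.py path, NQN and bdev parameters echoed by this run (the loop structure is paraphrased from the traced line numbers, not copied from the script):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target setup: subsystem, RDMA listener on the first target IP, and the
  # Malloc0 -> Delay0 chain plus the NULL1 bdev that later gets resized.
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns $nqn Delay0
  $rpc nvmf_subsystem_add_ns $nqn NULL1

  # Stress phase: while the spdk_nvme_perf workload (PERF_PID in the trace)
  # is still alive, hot-remove and re-add namespace 1 and grow NULL1 by one
  # block per pass; null_size climbs from 1000 as seen in the entries below.
  null_size=1000
  while kill -0 "$PERF_PID" 2> /dev/null; do
      $rpc nvmf_subsystem_remove_ns $nqn 1
      $rpc nvmf_subsystem_add_ns $nqn Delay0
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 $null_size
  done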
00:10:44.863 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:44.863 14:53:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:45.124 [2024-07-15 14:53:00.974111] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:45.124 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:45.124 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:45.384 Malloc0 00:10:45.384 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:45.644 Delay0 00:10:45.644 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.644 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:45.905 NULL1 00:10:45.905 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:45.905 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:45.905 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1708016 00:10:45.905 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:45.905 14:53:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.166 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.166 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.428 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:46.428 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:46.428 [2024-07-15 14:53:02.430486] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:46.428 true 00:10:46.428 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:46.428 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.689 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.950 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:46.951 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:46.951 true 00:10:46.951 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:46.951 14:53:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.212 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.473 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:47.473 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:47.473 true 00:10:47.473 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:47.473 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.733 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.733 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:47.733 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:47.995 true 00:10:47.995 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:47.995 14:53:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.255 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.255 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:48.255 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:48.516 true 00:10:48.516 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:48.516 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.777 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.777 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:48.777 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:49.037 true 00:10:49.037 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:49.037 14:53:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.297 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.297 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:49.297 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:49.561 true 00:10:49.561 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:49.561 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.821 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.821 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:49.821 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:50.082 true 00:10:50.082 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:50.082 14:53:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.082 14:53:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.344 14:53:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:50.344 14:53:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:50.606 true 00:10:50.606 14:53:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:50.606 14:53:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.606 14:53:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.867 14:53:06 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:50.867 14:53:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:51.128 true 00:10:51.128 14:53:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:51.128 14:53:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.128 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.389 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:51.389 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:51.389 true 00:10:51.650 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:51.650 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.650 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.911 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:51.911 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:51.911 true 00:10:51.911 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:51.911 14:53:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.173 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.434 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:52.434 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:52.434 true 00:10:52.434 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:52.434 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.695 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.956 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:52.956 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:52.956 true 00:10:52.956 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:52.956 14:53:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.218 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.479 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:53.479 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:53.479 true 00:10:53.479 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:53.479 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.740 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.740 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:53.740 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:54.001 true 00:10:54.001 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:54.001 14:53:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.261 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.261 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:54.261 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:54.629 true 00:10:54.629 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:54.629 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.629 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.938 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:54.938 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:54.938 true 00:10:54.938 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:54.938 14:53:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.214 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.475 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:55.475 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:55.475 true 00:10:55.475 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:55.475 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.735 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.735 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:55.735 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:55.996 true 00:10:55.996 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:55.996 14:53:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.256 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.256 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:56.256 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:56.516 true 00:10:56.516 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:56.516 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.774 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.774 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:56.774 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:57.034 true 00:10:57.034 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:57.034 14:53:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.293 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.293 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:57.293 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:57.552 true 00:10:57.552 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:57.552 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.552 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.811 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:57.811 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:58.116 true 00:10:58.116 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:58.116 14:53:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.116 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.374 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:58.374 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:58.374 true 00:10:58.374 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:58.374 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.632 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.891 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:58.891 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:58.891 true 00:10:58.891 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:58.891 14:53:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.150 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.411 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:59.411 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:59.411 true 00:10:59.411 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:59.411 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.673 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.934 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:59.934 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:59.934 true 00:10:59.934 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:10:59.934 14:53:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.195 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.456 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:00.456 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:00.456 true 00:11:00.456 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:00.456 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.717 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.978 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:00.978 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:00.978 true 00:11:00.978 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:00.978 14:53:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.238 14:53:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.238 14:53:17 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:01.238 14:53:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:01.498 true 00:11:01.498 14:53:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:01.498 14:53:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.758 14:53:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.758 14:53:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:01.758 14:53:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:02.018 true 00:11:02.018 14:53:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:02.018 14:53:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.279 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.279 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:02.279 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:02.538 true 00:11:02.538 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:02.538 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.798 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.798 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:02.798 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:03.058 true 00:11:03.058 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:03.058 14:53:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.320 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.320 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:03.320 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:03.581 true 00:11:03.581 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:03.581 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.842 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.843 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:03.843 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:04.104 true 00:11:04.104 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:04.104 14:53:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.104 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.364 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:04.364 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:04.625 true 00:11:04.626 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:04.626 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.626 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.887 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:04.887 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:05.148 true 00:11:05.148 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:05.148 14:53:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.148 14:53:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.409 14:53:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:05.409 14:53:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:05.409 true 00:11:05.670 14:53:21 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:05.670 14:53:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.670 14:53:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.931 14:53:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:05.931 14:53:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:05.931 true 00:11:06.190 14:53:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:06.190 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.190 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.449 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:06.449 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:06.449 true 00:11:06.449 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:06.449 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.708 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.968 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:06.968 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:06.968 true 00:11:06.968 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:06.968 14:53:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.228 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.488 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:07.488 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:07.488 true 00:11:07.488 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:07.488 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.748 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.009 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:08.009 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:08.009 true 00:11:08.009 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:08.009 14:53:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.269 14:53:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.530 14:53:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:11:08.530 14:53:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:08.530 true 00:11:08.530 14:53:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:08.530 14:53:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.792 14:53:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.792 14:53:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:08.792 14:53:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:09.053 true 00:11:09.053 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:09.053 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.313 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.313 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:09.313 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:09.574 true 00:11:09.574 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:09.574 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.835 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.835 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:11:09.835 14:53:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:11:10.095 true 00:11:10.095 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:10.095 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.356 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.356 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:11:10.356 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:11:10.616 true 00:11:10.616 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:10.616 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.876 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.876 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:11:10.876 14:53:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:11:11.137 true 00:11:11.137 14:53:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:11.137 14:53:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.137 14:53:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.398 14:53:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:11:11.398 14:53:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:11:11.659 true 00:11:11.659 14:53:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:11.659 14:53:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.659 14:53:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.920 14:53:27 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:11:11.920 14:53:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:11:12.181 true 00:11:12.181 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:12.181 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.181 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.441 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:11:12.441 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:11:12.699 true 00:11:12.699 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:12.699 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.699 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.958 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:11:12.958 14:53:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:11:12.958 true 00:11:13.217 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:13.217 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.217 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.476 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:11:13.476 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:11:13.476 true 00:11:13.736 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:13.736 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.736 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.995 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:11:13.995 14:53:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:11:13.995 true 00:11:13.995 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:13.995 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.255 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.515 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:11:14.515 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:11:14.515 true 00:11:14.515 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:14.515 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.775 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.036 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:11:15.036 14:53:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:11:15.036 true 00:11:15.036 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:15.036 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.296 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.556 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:11:15.556 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:11:15.556 true 00:11:15.556 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:15.556 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.816 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.077 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:11:16.077 14:53:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:11:16.077 true 00:11:16.077 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:16.077 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.338 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.599 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:11:16.599 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:11:16.599 true 00:11:16.599 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:16.599 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.860 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.121 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1062 00:11:17.121 14:53:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:11:17.121 true 00:11:17.121 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:17.121 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.386 Initializing NVMe Controllers 00:11:17.386 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:17.386 Controller IO queue size 128, less than required. 00:11:17.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:17.386 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:17.386 Initialization complete. Launching workers. 
00:11:17.386 ========================================================
00:11:17.386                                                                                  Latency(us)
00:11:17.386 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:11:17.386 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   48085.23      23.48    2661.76    1075.94    4814.51
00:11:17.386 ========================================================
00:11:17.386 Total                                                                          :   48085.23      23.48    2661.76    1075.94    4814.51
00:11:17.386
00:11:17.386 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.647 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1063 00:11:17.647 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:11:17.647 true 00:11:17.647 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1708016 00:11:17.647 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1708016) - No such process 00:11:17.647 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1708016 00:11:17.647 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.908 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.908 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:17.908 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:17.908 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:17.908 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:17.908 14:53:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:18.169 null0 00:11:18.169 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.169 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.169 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:18.429 null1 00:11:18.429 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.429 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.429 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:18.429 null2 00:11:18.429 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.429 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.429 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:18.691 null3 00:11:18.691 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.691 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.691 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:18.691 null4 00:11:18.997 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.997 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.997 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:18.997 null5 00:11:18.997 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.997 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.997 14:53:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:19.259 null6 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:19.259 null7 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
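The @58-@64 markers in the trace above are the fan-out phase: eight null bdevs are created over JSON-RPC, then one add_remove worker per bdev is started in the background and its PID recorded. A minimal sketch of the shape those markers imply; $rpc_py is a stand-in for the scripts/rpc.py path seen in the trace, and the exact loop syntax in the script may differ:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nthreads=8; pids=()                             # @58
    for ((i = 0; i < nthreads; i++)); do            # @59
        $rpc_py bdev_null_create "null$i" 100 4096  # @60: 100 MB null bdev, 4096-byte block size
    done
    for ((i = 0; i < nthreads; i++)); do            # @62
        add_remove $((i + 1)) "null$i" &            # @63: worker for NSID i+1, backed by null$i
        pids+=($!)                                  # @64: remember the background PID
    done

The later '@66 wait 1714723 ... 1714736' entry is the matching wait "${pids[@]}" that blocks until all eight workers exit.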
00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:19.259 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
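Each worker runs the add_remove shell function whose body the @14-@18 markers trace ('local nsid=2 bdev=null1', a ten-pass loop, one add_ns/remove_ns pair per pass). Reconstructed from those markers, with $rpc_py standing for scripts/rpc.py as before; a sketch, not the script verbatim:

    add_remove() {
        # Attach and detach the given bdev as namespace $nsid, ten times.
        local nsid=$1 bdev=$2                                                            # @14
        for ((i = 0; i < 10; i++)); do                                                   # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

With eight of these running concurrently against the same subsystem, the add/remove entries below interleave arbitrarily; that interleaving, rather than any single RPC, is what exercises the namespace hotplug paths.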
00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
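For comparison, the first phase of this trace (the @44-@50 entries before the I/O summary, null_size=1052 through 1063 with a 'true' reply per resize) was a serial hot-resize loop rather than a parallel one. A sketch of its shape, assuming $perf_pid holds the PID of the I/O generator (1708016 here) and that the initial null_size was set earlier in the log; the actual control flow in the script may be arranged differently:

    while kill -0 "$perf_pid"; do                                        # @44: loop while the generator is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach it backed by Delay0
        null_size=$((null_size + 1))                                     # @49: bump the target size...
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # @50: ...and resize; prints "true"
    done
    wait "$perf_pid"                                                     # @53: reap the generator once kill -0 fails

Once kill -0 failed ('No such process'), the script fell through to '@53 wait 1708016' and removed both namespaces (@54-@55) before starting the parallel phase above.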
00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1714723 1714725 1714727 1714729 1714730 1714732 1714734 1714736 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.260 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:19.522 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:19.522 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:19.522 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.522 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:19.522 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:19.522 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:19.522 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:19.522 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.783 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:19.784 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.784 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.784 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:19.784 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:19.784 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.784 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.045 14:53:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:20.045 14:53:36 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.045 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.307 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:20.568 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:20.569 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.831 14:53:36 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:20.831 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.093 14:53:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.093 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.094 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.355 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.617 14:53:37 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.617 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.879 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:22.141 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:22.141 14:53:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.141 14:53:38 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:22.141 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.402 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.402 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.402 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:22.402 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:22.402 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:22.402 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.402 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.402 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:22.403 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:22.664 
14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.664 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.665 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.665 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.665 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:22.665 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:22.665 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:22.665 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:22.665 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.665 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:22.927 rmmod nvme_rdma 00:11:22.927 rmmod nvme_fabrics 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1707472 ']' 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1707472 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1707472 ']' 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1707472 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1707472 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1707472' 00:11:22.927 killing process with pid 1707472 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1707472 00:11:22.927 14:53:38 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1707472 00:11:23.188 14:53:39 
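
The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns entries above are the whole point of the hotplug stress pass: null bdevs null0 through null7 are repeatedly attached to nqn.2016-06.io.spdk:cnode1 as namespace IDs 1 through 8 and torn back out while I/O is in flight, ten iterations per loop. A minimal sketch of that pattern, using the rpc.py path and NQN visible in the trace (the random namespace selection is an assumption; the real ns_hotplug_stress.sh drives the ordering differently):

    # Hedged sketch of the namespace hotplug stress loop traced above.
    # RPC path and NQN are taken from this run; the shuffle is illustrative.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    i=0
    while (( i < 10 )); do
        n=$(( RANDOM % 8 + 1 ))                                   # namespace IDs 1..8
        $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$(( n - 1 ))" || true
        m=$(( RANDOM % 8 + 1 ))
        $RPC nvmf_subsystem_remove_ns "$NQN" "$m" || true
        (( ++i ))
    done

Adds and removes are allowed to fail (|| true): the loop deliberately races against itself, so a namespace may already be present or already gone when a request lands.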
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:23.188 14:53:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:23.188 00:11:23.188 real 0m47.639s 00:11:23.188 user 3m21.934s 00:11:23.188 sys 0m15.218s 00:11:23.188 14:53:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.188 14:53:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.188 ************************************ 00:11:23.188 END TEST nvmf_ns_hotplug_stress 00:11:23.188 ************************************ 00:11:23.188 14:53:39 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:23.188 14:53:39 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:23.188 14:53:39 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:23.188 14:53:39 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.188 14:53:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:23.188 ************************************ 00:11:23.188 START TEST nvmf_connect_stress 00:11:23.188 ************************************ 00:11:23.188 14:53:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:23.450 * Looking for test storage... 00:11:23.450 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.450 14:53:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:31.598 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:31.598 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme 
connect -i 15' 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:11:31.598 Found net devices under 0000:98:00.0: mlx_0_0 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:31.598 Found net devices under 0000:98:00.1: mlx_0_1 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:31.598 14:53:47 
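
Behind the "Found 0000:98:00.x" lines above, nvmf/common.sh has matched the host's PCI bus against a table of Intel (e810/x722) and Mellanox device IDs and then resolved each hit to its kernel net device through sysfs. A compressed sketch of that resolution step (the two PCI addresses are hard-coded from this run; the real script assembles pci_devs from the ID tables in the trace):

    # Hedged sketch: resolve RDMA-capable PCI functions to netdev names
    # via the same sysfs walk nvmf/common.sh performs above.
    pci_devs=(0000:98:00.0 0000:98:00.1)    # assumption: addresses from this run

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        [[ -e ${pci_net_devs[0]} ]] || continue            # no netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done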
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:31.598 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:31.599 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:31.599 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:11:31.599 altname enp152s0f0np0 00:11:31.599 altname ens817f0np0 00:11:31.599 inet 192.168.100.8/24 scope global mlx_0_0 00:11:31.599 valid_lft forever preferred_lft forever 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 
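
allocate_nic_ips then narrows that list to interfaces the RDMA stack also knows about, which is what the mlx_0_0 / mlx_0_1 pattern matches and the "continue 2" entries above implement. Reconstructed as a sketch (it assumes the net_devs array from the previous sketch, and that rxe_cfg, standing in for the rxe_cfg_small.sh call in the trace, prints one interface name per line):

    # Hedged sketch of get_rdma_if_list: keep only detected net devices
    # that the rxe configuration tool also reports as RDMA-capable.
    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # assumption: one name per line
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"    # first match wins for this net_dev
                    continue 2
                fi
            done
        done
    }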
00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:31.599 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:31.599 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:11:31.599 altname enp152s0f1np1 00:11:31.599 altname ens817f1np1 00:11:31.599 inet 192.168.100.9/24 scope global mlx_0_1 00:11:31.599 valid_lft forever preferred_lft forever 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 
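
The ip -o -4 / awk / cut fragments repeated above reduce to one small helper; reconstructed from the trace:

    # Hedged sketch of get_ip_address as traced above: print an
    # interface's first IPv4 address without the /prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # 192.168.100.8 on this testbed
    get_ip_address mlx_0_1    # 192.168.100.9

common.sh then joins the two results into RDMA_IP_LIST, taking head -n 1 as NVMF_FIRST_TARGET_IP and tail -n +2 as NVMF_SECOND_TARGET_IP, exactly as the surrounding entries show.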
00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:31.599 192.168.100.9' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:31.599 192.168.100.9' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:31.599 192.168.100.9' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1719901 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1719901 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1719901 ']' 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.599 
14:53:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.599 14:53:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.599 [2024-07-15 14:53:47.369907] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:31.599 [2024-07-15 14:53:47.369977] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.599 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.599 [2024-07-15 14:53:47.459295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:31.599 [2024-07-15 14:53:47.553177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.599 [2024-07-15 14:53:47.553243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.599 [2024-07-15 14:53:47.553252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.599 [2024-07-15 14:53:47.553259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.599 [2024-07-15 14:53:47.553265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.599 [2024-07-15 14:53:47.553388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.599 [2024-07-15 14:53:47.553557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.599 [2024-07-15 14:53:47.553557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.173 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.173 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:32.173 14:53:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:32.173 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:32.173 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.173 14:53:48 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.173 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:32.173 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.173 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.173 [2024-07-15 14:53:48.234684] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1806920/0x180ae10) succeed. 00:11:32.433 [2024-07-15 14:53:48.248074] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1807ec0/0x184c4a0) succeed. 
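
At this point the target side is up: nvmf_tgt was started with core mask 0xE, waitforlisten polled /var/tmp/spdk.sock until pid 1719901 answered, and the RDMA transport created its two IB devices. The next trace lines replay the standard bring-up RPC sequence, summarized in this sketch (paths, NQN, addresses, and sizes all come from the trace; the socket-polling loop is a simplified stand-in for waitforlisten, and error handling is omitted):

    # Hedged sketch of the connect_stress target bring-up traced here.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py

    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready.
    until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10            # allow any host, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512 B blocks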
00:11:32.433 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.433 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:32.433 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.433 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.433 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.433 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:32.433 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.433 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.434 [2024-07-15 14:53:48.362819] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.434 NULL1 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1720246 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.434 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.007 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.007 14:53:48 
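
Everything from here down is the stress phase proper: the connect_stress binary (PERF_PID 1720246) hammers the 192.168.100.8:4420 listener with connect/disconnect cycles for 10 seconds (-t 10), while the script repeatedly proves the tool is still alive with kill -0 and pushes RPC batches at the target through rpc_cmd, using the rpc.txt file built by the seq 1 20 / cat loop above. The polling idiom, sketched (the RPC used in the loop body is an assumption; the real script replays the batch file):

    # Hedged sketch of the liveness-poll half of connect_stress traced below.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    PERF_PID=1720246    # pid of the connect_stress binary in this run

    while kill -0 "$PERF_PID" 2>/dev/null; do
        $RPC framework_get_reactors >/dev/null    # assumption: any cheap RPC will do
        sleep 1
    done
    wait "$PERF_PID"    # propagate the stress tool's exit status

The test fails the moment kill -0 stops succeeding before the run is supposed to end; a clean exit after the 10-second window produces the "No such process" / wait pair seen at the bottom of the trace.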
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:33.007 14:53:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.007 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.007 14:53:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.269 14:53:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.269 14:53:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:33.269 14:53:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.269 14:53:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.269 14:53:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.530 14:53:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.530 14:53:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:33.530 14:53:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.530 14:53:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.530 14:53:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.810 14:53:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.810 14:53:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:33.810 14:53:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.810 14:53:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.810 14:53:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.111 14:53:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.111 14:53:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:34.111 14:53:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.111 14:53:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.111 14:53:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.708 14:53:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.708 14:53:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:34.708 14:53:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.708 14:53:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.708 14:53:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.968 14:53:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.968 14:53:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:34.968 14:53:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.968 14:53:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.968 14:53:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.228 14:53:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.228 14:53:51 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:35.228 14:53:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.228 14:53:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.228 14:53:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.488 14:53:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.488 14:53:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:35.488 14:53:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.488 14:53:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.488 14:53:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.748 14:53:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.748 14:53:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:35.748 14:53:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.748 14:53:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.748 14:53:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.317 14:53:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.318 14:53:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:36.318 14:53:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.318 14:53:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.318 14:53:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.578 14:53:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.578 14:53:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:36.578 14:53:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.578 14:53:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.578 14:53:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 14:53:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.838 14:53:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:36.838 14:53:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.838 14:53:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.838 14:53:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.097 14:53:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.097 14:53:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:37.097 14:53:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.097 14:53:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.097 14:53:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.356 14:53:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.356 14:53:53 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:37.356 14:53:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.356 14:53:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.356 14:53:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.924 14:53:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.924 14:53:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:37.924 14:53:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.924 14:53:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.924 14:53:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.185 14:53:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.185 14:53:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:38.185 14:53:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.185 14:53:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.185 14:53:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.446 14:53:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.446 14:53:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:38.446 14:53:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.446 14:53:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.446 14:53:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.706 14:53:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.706 14:53:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:38.706 14:53:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.706 14:53:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.706 14:53:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.276 14:53:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.276 14:53:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:39.276 14:53:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.276 14:53:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.276 14:53:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.536 14:53:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.537 14:53:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:39.537 14:53:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.537 14:53:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.537 14:53:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.797 14:53:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.797 14:53:55 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:39.797 14:53:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.797 14:53:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.797 14:53:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.058 14:53:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.058 14:53:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:40.058 14:53:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.058 14:53:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.058 14:53:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.320 14:53:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.320 14:53:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:40.320 14:53:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.320 14:53:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.320 14:53:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.892 14:53:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.892 14:53:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:40.892 14:53:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.892 14:53:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.892 14:53:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.153 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.153 14:53:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:41.153 14:53:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.153 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.153 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.414 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.414 14:53:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:41.414 14:53:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.414 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.414 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.675 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.675 14:53:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:41.675 14:53:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.675 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.675 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.936 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.936 14:53:57 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:41.936 14:53:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.936 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.936 14:53:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.508 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.508 14:53:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:42.508 14:53:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.508 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.508 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.508 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1720246 00:11:42.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1720246) - No such process 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1720246 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:42.771 rmmod nvme_rdma 00:11:42.771 rmmod nvme_fabrics 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1719901 ']' 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1719901 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1719901 ']' 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1719901 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1719901 
00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1719901' 00:11:42.771 killing process with pid 1719901 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1719901 00:11:42.771 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1719901 00:11:43.032 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.032 14:53:58 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:43.032 00:11:43.032 real 0m19.704s 00:11:43.032 user 0m42.019s 00:11:43.032 sys 0m7.350s 00:11:43.032 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.032 14:53:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.032 ************************************ 00:11:43.032 END TEST nvmf_connect_stress 00:11:43.032 ************************************ 00:11:43.032 14:53:58 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:43.032 14:53:58 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:43.032 14:53:58 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:43.032 14:53:58 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.032 14:53:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:43.032 ************************************ 00:11:43.032 START TEST nvmf_fused_ordering 00:11:43.032 ************************************ 00:11:43.032 14:53:59 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:43.294 * Looking for test storage... 
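[editor's note] The connect_stress block that just finished above repeats a single pattern until the stress workload exits: probe the background PID with kill -0 and, while it is still alive, keep issuing an RPC against the target; once kill -0 reports "No such process", the rpc.txt scratch file is removed and nvmftestfini tears the target down. A minimal sketch of that polling pattern follows; the variable STRESS_PID and the poll_rpc helper are illustrative names, not the exact ones used by connect_stress.sh.

#!/usr/bin/env bash
# Sketch of the poll-until-exit pattern seen in the connect_stress trace.
# STRESS_PID and poll_rpc are hypothetical names for illustration only.
set -euo pipefail

STRESS_PID=$1        # PID of the backgrounded stress workload

poll_rpc() {
    # Any lightweight RPC keeps the target exercised; rpc_get_methods is one option.
    ./scripts/rpc.py rpc_get_methods > /dev/null
}

# kill -0 delivers no signal; it only tests whether the PID still exists.
while kill -0 "$STRESS_PID" 2> /dev/null; do
    poll_rpc
done

# Once the loop falls through ("No such process" in the trace), clean up.
rm -f rpc.txt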
00:11:43.294 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.294 14:53:59 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:43.295 14:53:59 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:51.438 14:54:07 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:51.438 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:51.438 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:11:51.438 Found net devices under 0000:98:00.0: mlx_0_0 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:51.438 Found net devices under 0000:98:00.1: mlx_0_1 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:51.438 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:51.439 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.439 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:11:51.439 altname enp152s0f0np0 00:11:51.439 altname ens817f0np0 00:11:51.439 inet 192.168.100.8/24 scope global mlx_0_0 00:11:51.439 valid_lft forever preferred_lft forever 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:51.439 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.439 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:11:51.439 altname enp152s0f1np1 00:11:51.439 altname ens817f1np1 00:11:51.439 inet 192.168.100.9/24 scope global mlx_0_1 00:11:51.439 valid_lft forever preferred_lft forever 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:51.439 192.168.100.9' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:51.439 192.168.100.9' 
00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:51.439 192.168.100.9' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1726724 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1726724 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1726724 ']' 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.439 14:54:07 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:51.439 [2024-07-15 14:54:07.328894] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:51.439 [2024-07-15 14:54:07.328963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.439 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.439 [2024-07-15 14:54:07.418572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.701 [2024-07-15 14:54:07.512103] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:51.701 [2024-07-15 14:54:07.512188] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.701 [2024-07-15 14:54:07.512197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.701 [2024-07-15 14:54:07.512204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.701 [2024-07-15 14:54:07.512211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.701 [2024-07-15 14:54:07.512251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 [2024-07-15 14:54:08.195450] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1031360/0x1035850) succeed. 00:11:52.272 [2024-07-15 14:54:08.209131] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1032860/0x1076ee0) succeed. 
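[editor's note] At this point the target has been started (nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and an RDMA transport created, which is what produces the two "Create IB device mlx5_*" notices above. The sketch below is a rough standalone approximation of that bring-up, under the assumption that rpc_cmd in the trace forwards to scripts/rpc.py against the default /var/tmp/spdk.sock socket from an SPDK checkout; it is not a copy of the test framework's helpers.

#!/usr/bin/env bash
# Approximate replay of the target bring-up traced above (assumptions:
# SPDK checkout as working directory, default RPC socket).
set -euo pipefail

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
NVMF_PID=$!
echo "nvmf_tgt started as pid $NVMF_PID"

# Wait for the RPC socket to come up before configuring anything.
until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done

# Same transport options as the trace: RDMA with 1024 shared buffers and
# 8192 bytes of in-capsule data.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192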
00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 [2024-07-15 14:54:08.282006] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 NULL1 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.272 14:54:08 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:52.532 [2024-07-15 14:54:08.352283] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
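[editor's note] The remaining rpc_cmd calls above complete the data path: subsystem cnode1 with up to 10 namespaces, an RDMA listener on 192.168.100.8:4420, a 1000 MiB null bdev, and that bdev attached as a namespace, which is exactly what the fused_ordering binary then reports ("Namespace ID: 1 size: 1GB"). Sketched below with scripts/rpc.py, using only the arguments that appear in the trace; the relative paths assume the same SPDK checkout layout.

#!/usr/bin/env bash
# Subsystem and namespace setup matching the rpc_cmd sequence above.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1

./scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB, 512-byte blocks
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" NULL1

# The fused_ordering test app then connects with the same transport ID
# string shown in the trace:
./test/nvme/fused_ordering/fused_ordering \
    -r "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:$NQN"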
00:11:52.532 [2024-07-15 14:54:08.352350] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1726860 ] 00:11:52.532 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.532 Attached to nqn.2016-06.io.spdk:cnode1 00:11:52.532 Namespace ID: 1 size: 1GB 00:11:52.532 fused_ordering(0) 00:11:52.532 fused_ordering(1) 00:11:52.532 fused_ordering(2) 00:11:52.532 fused_ordering(3) 00:11:52.532 fused_ordering(4) 00:11:52.532 fused_ordering(5) 00:11:52.532 fused_ordering(6) 00:11:52.532 fused_ordering(7) 00:11:52.532 fused_ordering(8) 00:11:52.532 fused_ordering(9) 00:11:52.532 fused_ordering(10) 00:11:52.532 fused_ordering(11) 00:11:52.532 fused_ordering(12) 00:11:52.532 fused_ordering(13) 00:11:52.532 fused_ordering(14) 00:11:52.532 fused_ordering(15) 00:11:52.532 fused_ordering(16) 00:11:52.532 fused_ordering(17) 00:11:52.532 fused_ordering(18) 00:11:52.532 fused_ordering(19) 00:11:52.532 fused_ordering(20) 00:11:52.532 fused_ordering(21) 00:11:52.532 fused_ordering(22) 00:11:52.532 fused_ordering(23) 00:11:52.532 fused_ordering(24) 00:11:52.532 fused_ordering(25) 00:11:52.532 fused_ordering(26) 00:11:52.532 fused_ordering(27) 00:11:52.532 fused_ordering(28) 00:11:52.532 fused_ordering(29) 00:11:52.532 fused_ordering(30) 00:11:52.532 fused_ordering(31) 00:11:52.532 fused_ordering(32) 00:11:52.532 fused_ordering(33) 00:11:52.532 fused_ordering(34) 00:11:52.532 fused_ordering(35) 00:11:52.532 fused_ordering(36) 00:11:52.532 fused_ordering(37) 00:11:52.532 fused_ordering(38) 00:11:52.532 fused_ordering(39) 00:11:52.532 fused_ordering(40) 00:11:52.532 fused_ordering(41) 00:11:52.532 fused_ordering(42) 00:11:52.532 fused_ordering(43) 00:11:52.532 fused_ordering(44) 00:11:52.532 fused_ordering(45) 00:11:52.532 fused_ordering(46) 00:11:52.532 fused_ordering(47) 00:11:52.532 fused_ordering(48) 00:11:52.532 fused_ordering(49) 00:11:52.532 fused_ordering(50) 00:11:52.532 fused_ordering(51) 00:11:52.532 fused_ordering(52) 00:11:52.532 fused_ordering(53) 00:11:52.532 fused_ordering(54) 00:11:52.532 fused_ordering(55) 00:11:52.532 fused_ordering(56) 00:11:52.532 fused_ordering(57) 00:11:52.532 fused_ordering(58) 00:11:52.532 fused_ordering(59) 00:11:52.532 fused_ordering(60) 00:11:52.532 fused_ordering(61) 00:11:52.532 fused_ordering(62) 00:11:52.532 fused_ordering(63) 00:11:52.532 fused_ordering(64) 00:11:52.532 fused_ordering(65) 00:11:52.532 fused_ordering(66) 00:11:52.532 fused_ordering(67) 00:11:52.532 fused_ordering(68) 00:11:52.532 fused_ordering(69) 00:11:52.532 fused_ordering(70) 00:11:52.532 fused_ordering(71) 00:11:52.532 fused_ordering(72) 00:11:52.532 fused_ordering(73) 00:11:52.532 fused_ordering(74) 00:11:52.532 fused_ordering(75) 00:11:52.532 fused_ordering(76) 00:11:52.532 fused_ordering(77) 00:11:52.532 fused_ordering(78) 00:11:52.532 fused_ordering(79) 00:11:52.532 fused_ordering(80) 00:11:52.532 fused_ordering(81) 00:11:52.532 fused_ordering(82) 00:11:52.533 fused_ordering(83) 00:11:52.533 fused_ordering(84) 00:11:52.533 fused_ordering(85) 00:11:52.533 fused_ordering(86) 00:11:52.533 fused_ordering(87) 00:11:52.533 fused_ordering(88) 00:11:52.533 fused_ordering(89) 00:11:52.533 fused_ordering(90) 00:11:52.533 fused_ordering(91) 00:11:52.533 fused_ordering(92) 00:11:52.533 fused_ordering(93) 00:11:52.533 fused_ordering(94) 00:11:52.533 fused_ordering(95) 00:11:52.533 fused_ordering(96) 
00:11:52.533 fused_ordering(97) 00:11:52.533 fused_ordering(98) 00:11:52.533 fused_ordering(99) 00:11:52.533 fused_ordering(100) 00:11:52.533 fused_ordering(101) 00:11:52.533 fused_ordering(102) 00:11:52.533 fused_ordering(103) 00:11:52.533 fused_ordering(104) 00:11:52.533 fused_ordering(105) 00:11:52.533 fused_ordering(106) 00:11:52.533 fused_ordering(107) 00:11:52.533 fused_ordering(108) 00:11:52.533 fused_ordering(109) 00:11:52.533 fused_ordering(110) 00:11:52.533 fused_ordering(111) 00:11:52.533 fused_ordering(112) 00:11:52.533 fused_ordering(113) 00:11:52.533 fused_ordering(114) 00:11:52.533 fused_ordering(115) 00:11:52.533 fused_ordering(116) 00:11:52.533 fused_ordering(117) 00:11:52.533 fused_ordering(118) 00:11:52.533 fused_ordering(119) 00:11:52.533 fused_ordering(120) 00:11:52.533 fused_ordering(121) 00:11:52.533 fused_ordering(122) 00:11:52.533 fused_ordering(123) 00:11:52.533 fused_ordering(124) 00:11:52.533 fused_ordering(125) 00:11:52.533 fused_ordering(126) 00:11:52.533 fused_ordering(127) 00:11:52.533 fused_ordering(128) 00:11:52.533 fused_ordering(129) 00:11:52.533 fused_ordering(130) 00:11:52.533 fused_ordering(131) 00:11:52.533 fused_ordering(132) 00:11:52.533 fused_ordering(133) 00:11:52.533 fused_ordering(134) 00:11:52.533 fused_ordering(135) 00:11:52.533 fused_ordering(136) 00:11:52.533 fused_ordering(137) 00:11:52.533 fused_ordering(138) 00:11:52.533 fused_ordering(139) 00:11:52.533 fused_ordering(140) 00:11:52.533 fused_ordering(141) 00:11:52.533 fused_ordering(142) 00:11:52.533 fused_ordering(143) 00:11:52.533 fused_ordering(144) 00:11:52.533 fused_ordering(145) 00:11:52.533 fused_ordering(146) 00:11:52.533 fused_ordering(147) 00:11:52.533 fused_ordering(148) 00:11:52.533 fused_ordering(149) 00:11:52.533 fused_ordering(150) 00:11:52.533 fused_ordering(151) 00:11:52.533 fused_ordering(152) 00:11:52.533 fused_ordering(153) 00:11:52.533 fused_ordering(154) 00:11:52.533 fused_ordering(155) 00:11:52.533 fused_ordering(156) 00:11:52.533 fused_ordering(157) 00:11:52.533 fused_ordering(158) 00:11:52.533 fused_ordering(159) 00:11:52.533 fused_ordering(160) 00:11:52.533 fused_ordering(161) 00:11:52.533 fused_ordering(162) 00:11:52.533 fused_ordering(163) 00:11:52.533 fused_ordering(164) 00:11:52.533 fused_ordering(165) 00:11:52.533 fused_ordering(166) 00:11:52.533 fused_ordering(167) 00:11:52.533 fused_ordering(168) 00:11:52.533 fused_ordering(169) 00:11:52.533 fused_ordering(170) 00:11:52.533 fused_ordering(171) 00:11:52.533 fused_ordering(172) 00:11:52.533 fused_ordering(173) 00:11:52.533 fused_ordering(174) 00:11:52.533 fused_ordering(175) 00:11:52.533 fused_ordering(176) 00:11:52.533 fused_ordering(177) 00:11:52.533 fused_ordering(178) 00:11:52.533 fused_ordering(179) 00:11:52.533 fused_ordering(180) 00:11:52.533 fused_ordering(181) 00:11:52.533 fused_ordering(182) 00:11:52.533 fused_ordering(183) 00:11:52.533 fused_ordering(184) 00:11:52.533 fused_ordering(185) 00:11:52.533 fused_ordering(186) 00:11:52.533 fused_ordering(187) 00:11:52.533 fused_ordering(188) 00:11:52.533 fused_ordering(189) 00:11:52.533 fused_ordering(190) 00:11:52.533 fused_ordering(191) 00:11:52.533 fused_ordering(192) 00:11:52.533 fused_ordering(193) 00:11:52.533 fused_ordering(194) 00:11:52.533 fused_ordering(195) 00:11:52.533 fused_ordering(196) 00:11:52.533 fused_ordering(197) 00:11:52.533 fused_ordering(198) 00:11:52.533 fused_ordering(199) 00:11:52.533 fused_ordering(200) 00:11:52.533 fused_ordering(201) 00:11:52.533 fused_ordering(202) 00:11:52.533 fused_ordering(203) 00:11:52.533 
fused_ordering(204) 00:11:52.533 fused_ordering(205) 00:11:52.793 fused_ordering(206) 00:11:52.793 fused_ordering(207) 00:11:52.793 fused_ordering(208) 00:11:52.793 fused_ordering(209) 00:11:52.793 fused_ordering(210) 00:11:52.793 fused_ordering(211) 00:11:52.793 fused_ordering(212) 00:11:52.793 fused_ordering(213) 00:11:52.793 fused_ordering(214) 00:11:52.793 fused_ordering(215) 00:11:52.793 fused_ordering(216) 00:11:52.793 fused_ordering(217) 00:11:52.793 fused_ordering(218) 00:11:52.793 fused_ordering(219) 00:11:52.793 fused_ordering(220) 00:11:52.793 fused_ordering(221) 00:11:52.793 fused_ordering(222) 00:11:52.793 fused_ordering(223) 00:11:52.793 fused_ordering(224) 00:11:52.793 fused_ordering(225) 00:11:52.793 fused_ordering(226) 00:11:52.793 fused_ordering(227) 00:11:52.793 fused_ordering(228) 00:11:52.793 fused_ordering(229) 00:11:52.793 fused_ordering(230) 00:11:52.793 fused_ordering(231) 00:11:52.793 fused_ordering(232) 00:11:52.793 fused_ordering(233) 00:11:52.793 fused_ordering(234) 00:11:52.793 fused_ordering(235) 00:11:52.793 fused_ordering(236) 00:11:52.793 fused_ordering(237) 00:11:52.793 fused_ordering(238) 00:11:52.793 fused_ordering(239) 00:11:52.793 fused_ordering(240) 00:11:52.793 fused_ordering(241) 00:11:52.793 fused_ordering(242) 00:11:52.793 fused_ordering(243) 00:11:52.793 fused_ordering(244) 00:11:52.793 fused_ordering(245) 00:11:52.793 fused_ordering(246) 00:11:52.793 fused_ordering(247) 00:11:52.793 fused_ordering(248) 00:11:52.793 fused_ordering(249) 00:11:52.793 fused_ordering(250) 00:11:52.793 fused_ordering(251) 00:11:52.793 fused_ordering(252) 00:11:52.793 fused_ordering(253) 00:11:52.793 fused_ordering(254) 00:11:52.793 fused_ordering(255) 00:11:52.793 fused_ordering(256) 00:11:52.793 fused_ordering(257) 00:11:52.793 fused_ordering(258) 00:11:52.793 fused_ordering(259) 00:11:52.793 fused_ordering(260) 00:11:52.793 fused_ordering(261) 00:11:52.793 fused_ordering(262) 00:11:52.793 fused_ordering(263) 00:11:52.793 fused_ordering(264) 00:11:52.793 fused_ordering(265) 00:11:52.793 fused_ordering(266) 00:11:52.793 fused_ordering(267) 00:11:52.793 fused_ordering(268) 00:11:52.793 fused_ordering(269) 00:11:52.794 fused_ordering(270) 00:11:52.794 fused_ordering(271) 00:11:52.794 fused_ordering(272) 00:11:52.794 fused_ordering(273) 00:11:52.794 fused_ordering(274) 00:11:52.794 fused_ordering(275) 00:11:52.794 fused_ordering(276) 00:11:52.794 fused_ordering(277) 00:11:52.794 fused_ordering(278) 00:11:52.794 fused_ordering(279) 00:11:52.794 fused_ordering(280) 00:11:52.794 fused_ordering(281) 00:11:52.794 fused_ordering(282) 00:11:52.794 fused_ordering(283) 00:11:52.794 fused_ordering(284) 00:11:52.794 fused_ordering(285) 00:11:52.794 fused_ordering(286) 00:11:52.794 fused_ordering(287) 00:11:52.794 fused_ordering(288) 00:11:52.794 fused_ordering(289) 00:11:52.794 fused_ordering(290) 00:11:52.794 fused_ordering(291) 00:11:52.794 fused_ordering(292) 00:11:52.794 fused_ordering(293) 00:11:52.794 fused_ordering(294) 00:11:52.794 fused_ordering(295) 00:11:52.794 fused_ordering(296) 00:11:52.794 fused_ordering(297) 00:11:52.794 fused_ordering(298) 00:11:52.794 fused_ordering(299) 00:11:52.794 fused_ordering(300) 00:11:52.794 fused_ordering(301) 00:11:52.794 fused_ordering(302) 00:11:52.794 fused_ordering(303) 00:11:52.794 fused_ordering(304) 00:11:52.794 fused_ordering(305) 00:11:52.794 fused_ordering(306) 00:11:52.794 fused_ordering(307) 00:11:52.794 fused_ordering(308) 00:11:52.794 fused_ordering(309) 00:11:52.794 fused_ordering(310) 00:11:52.794 fused_ordering(311) 
00:11:52.794 fused_ordering(312) 00:11:52.794 fused_ordering(313) 00:11:52.794 fused_ordering(314) [... fused_ordering(315) through fused_ordering(954) continue in unbroken numeric sequence, timestamps advancing from 00:11:52.794 to 00:11:53.317 ...] 00:11:53.317 fused_ordering(955) 00:11:53.317 fused_ordering(956)
00:11:53.317 fused_ordering(957) 00:11:53.317 fused_ordering(958) 00:11:53.317 fused_ordering(959) 00:11:53.317 fused_ordering(960) 00:11:53.317 fused_ordering(961) 00:11:53.317 fused_ordering(962) 00:11:53.317 fused_ordering(963) 00:11:53.317 fused_ordering(964) 00:11:53.317 fused_ordering(965) 00:11:53.317 fused_ordering(966) 00:11:53.317 fused_ordering(967) 00:11:53.317 fused_ordering(968) 00:11:53.317 fused_ordering(969) 00:11:53.317 fused_ordering(970) 00:11:53.317 fused_ordering(971) 00:11:53.317 fused_ordering(972) 00:11:53.317 fused_ordering(973) 00:11:53.317 fused_ordering(974) 00:11:53.317 fused_ordering(975) 00:11:53.317 fused_ordering(976) 00:11:53.317 fused_ordering(977) 00:11:53.317 fused_ordering(978) 00:11:53.317 fused_ordering(979) 00:11:53.317 fused_ordering(980) 00:11:53.317 fused_ordering(981) 00:11:53.317 fused_ordering(982) 00:11:53.317 fused_ordering(983) 00:11:53.317 fused_ordering(984) 00:11:53.317 fused_ordering(985) 00:11:53.317 fused_ordering(986) 00:11:53.317 fused_ordering(987) 00:11:53.317 fused_ordering(988) 00:11:53.317 fused_ordering(989) 00:11:53.317 fused_ordering(990) 00:11:53.317 fused_ordering(991) 00:11:53.317 fused_ordering(992) 00:11:53.317 fused_ordering(993) 00:11:53.317 fused_ordering(994) 00:11:53.317 fused_ordering(995) 00:11:53.317 fused_ordering(996) 00:11:53.317 fused_ordering(997) 00:11:53.317 fused_ordering(998) 00:11:53.317 fused_ordering(999) 00:11:53.317 fused_ordering(1000) 00:11:53.317 fused_ordering(1001) 00:11:53.317 fused_ordering(1002) 00:11:53.317 fused_ordering(1003) 00:11:53.317 fused_ordering(1004) 00:11:53.317 fused_ordering(1005) 00:11:53.317 fused_ordering(1006) 00:11:53.317 fused_ordering(1007) 00:11:53.317 fused_ordering(1008) 00:11:53.317 fused_ordering(1009) 00:11:53.317 fused_ordering(1010) 00:11:53.317 fused_ordering(1011) 00:11:53.317 fused_ordering(1012) 00:11:53.317 fused_ordering(1013) 00:11:53.317 fused_ordering(1014) 00:11:53.317 fused_ordering(1015) 00:11:53.317 fused_ordering(1016) 00:11:53.317 fused_ordering(1017) 00:11:53.317 fused_ordering(1018) 00:11:53.317 fused_ordering(1019) 00:11:53.317 fused_ordering(1020) 00:11:53.317 fused_ordering(1021) 00:11:53.317 fused_ordering(1022) 00:11:53.317 fused_ordering(1023) 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:53.317 rmmod nvme_rdma 00:11:53.317 rmmod nvme_fabrics 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1726724 ']' 
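The teardown that closes the fused_ordering run above is the nvmftestfini/nvmfcleanup path from test/nvmf/common.sh: it syncs, then retries `modprobe -r nvme-rdma` and `modprobe -r nvme-fabrics` with error checking relaxed, since the modules can still hold references while qpairs drain. A minimal stand-alone sketch of that unload-with-retries idea (the function name, retry count and sleep are illustrative, not the exact helper from common.sh):

    # Sketch only: approximates the cleanup loop traced above, not the real common.sh code.
    unload_nvme_rdma_sketch() {
        sync                     # flush outstanding I/O before pulling modules
        set +e                   # module removal may fail while references remain
        for i in {1..20}; do
            modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
            sleep 1              # let in-flight qpairs drain, then retry
        done
        set -e
    }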
00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1726724 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1726724 ']' 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1726724 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1726724 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1726724' 00:11:53.317 killing process with pid 1726724 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1726724 00:11:53.317 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1726724 00:11:53.578 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:53.578 14:54:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:53.578 00:11:53.578 real 0m10.536s 00:11:53.578 user 0m5.527s 00:11:53.578 sys 0m6.410s 00:11:53.578 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.578 14:54:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.578 ************************************ 00:11:53.578 END TEST nvmf_fused_ordering 00:11:53.578 ************************************ 00:11:53.578 14:54:09 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:53.578 14:54:09 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:53.578 14:54:09 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:53.578 14:54:09 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.578 14:54:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:53.578 ************************************ 00:11:53.578 START TEST nvmf_delete_subsystem 00:11:53.578 ************************************ 00:11:53.578 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:53.838 * Looking for test storage... 
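Before the next test starts, killprocess tears down the long-running nvmf_tgt: it checks that the pid is still alive, reads the command name with `ps --no-headers -o comm=` (here reactor_1, an SPDK reactor thread), refuses to kill anything identifying as sudo, then signals and reaps the process. A rough sketch of that shape, simplified from what the trace shows (the real helper in autotest_common.sh handles more corner cases):

    # Illustrative approximation of the killprocess flow above; not the actual helper.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_1 for nvmf_tgt
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # wait only reaps our own children
    }

run_test then wraps the next script, so everything that follows is delete_subsystem.sh executed with --transport=rdma under its own xtrace prefix (nvmf_rdma.nvmf_delete_subsystem).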
00:11:53.838 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.838 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:53.839 14:54:09 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.981 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:01.981 14:54:17 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:01.982 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:01.982 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:01.982 Found net devices under 0000:98:00.0: mlx_0_0 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.982 14:54:17 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:01.982 Found net devices under 0000:98:00.1: mlx_0_1 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.982 14:54:17 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:01.982 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.982 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:12:01.982 altname enp152s0f0np0 00:12:01.982 altname ens817f0np0 00:12:01.982 inet 192.168.100.8/24 scope global mlx_0_0 00:12:01.982 valid_lft forever preferred_lft forever 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:01.982 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.982 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:12:01.982 altname enp152s0f1np1 00:12:01.982 altname ens817f1np1 00:12:01.982 inet 192.168.100.9/24 scope global mlx_0_1 00:12:01.982 valid_lft forever preferred_lft forever 00:12:01.982 14:54:17 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:01.982 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:01.983 192.168.100.9' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:01.983 192.168.100.9' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:01.983 192.168.100.9' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1731847 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1731847 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1731847 ']' 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.983 14:54:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.983 [2024-07-15 14:54:17.729344] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
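At this point nvmftestinit has everything it needs: the two mlx_0_* ports were assigned 192.168.100.8 and 192.168.100.9, the first address becomes the target IP, and nvmfappstart launches nvmf_tgt on core mask 0x3 and waits for its RPC socket. A compressed reconstruction of those few steps, reusing the values from the trace (the polling loop is an assumption; the real waitforlisten helper is more careful):

    # Sketch, assuming the default RPC socket path /var/tmp/spdk.sock.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'        # one address per line, as gathered above
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'

    modprobe nvme-rdma                                  # host-side driver for the later connects

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &        # target on cores 0-1, all tracepoint groups on
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done # stand-in for waitforlisten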
00:12:01.983 [2024-07-15 14:54:17.729415] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.983 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.983 [2024-07-15 14:54:17.800655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:01.983 [2024-07-15 14:54:17.874405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.983 [2024-07-15 14:54:17.874445] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.983 [2024-07-15 14:54:17.874453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.983 [2024-07-15 14:54:17.874460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.983 [2024-07-15 14:54:17.874465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.983 [2024-07-15 14:54:17.874616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.983 [2024-07-15 14:54:17.874617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.553 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.553 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:02.553 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:02.553 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:02.553 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.553 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.553 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:02.553 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.553 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.553 [2024-07-15 14:54:18.563245] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b7bb70/0x1b80060) succeed. 00:12:02.553 [2024-07-15 14:54:18.576533] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b7d070/0x1bc16f0) succeed. 
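With both mlx5 IB devices registered, the script creates the RDMA transport over RPC. rpc_cmd is a thin wrapper that forwards its arguments to the target's RPC server, so the call above is presumably equivalent to invoking scripts/rpc.py directly:

    # Same transport parameters as the rpc_cmd call in the trace:
    # 1024 shared data buffers, 8 KiB I/O unit size (-u).
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py nvmf_get_transports        # optional: confirm the transport exists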
00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.814 [2024-07-15 14:54:18.661561] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.814 NULL1 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.814 Delay0 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1731894 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:02.814 14:54:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:02.814 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.814 [2024-07-15 14:54:18.770487] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
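The setup above is deliberately slow: NULL1 is a 1000 MB null bdev with 512-byte blocks, and Delay0 wraps it with 1,000,000-microsecond (roughly one second) average and p99 latencies for both reads and writes, so the 128-deep perf workload always has commands outstanding. Deleting the subsystem two seconds into the five-second run (next) therefore races the teardown against in-flight I/O; the qpair completion errors and the long runs of "completed with error (sct=0, sc=8)" that follow are the expected result (generic NVMe status 0x08, Command Aborted due to SQ Deletion), not a test failure. A condensed reconstruction of the scenario, reusing the exact commands from the trace but simplifying the pid and sleep handling:

    # Sketch of the delete-under-load sequence driven by delete_subsystem.sh above.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512                # 1000 MB null backing device, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s latencies keep I/O in flight
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &       # 5 s randrw, 70% reads, queue depth 128
    perf_pid=$!
    sleep 2                                             # let perf queue up work first

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-run
    wait $perf_pid || true                              # perf exits with I/O errors, as expected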
00:12:04.727 14:54:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.727 14:54:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.727 14:54:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:06.108 NVMe io qpair process completion error 00:12:06.108 NVMe io qpair process completion error 00:12:06.108 NVMe io qpair process completion error 00:12:06.108 NVMe io qpair process completion error 00:12:06.108 NVMe io qpair process completion error 00:12:06.108 NVMe io qpair process completion error 00:12:06.108 14:54:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.108 14:54:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:06.108 14:54:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1731894 00:12:06.108 14:54:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:06.369 14:54:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:06.369 14:54:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1731894 00:12:06.369 14:54:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Write completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Write completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Write completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O failed: -6 00:12:06.966 Read completed with error (sct=0, sc=8) 00:12:06.966 starting I/O 
failed: -6
00:12:06.966 [... the remaining qpair completion records, all of the form "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" or "starting I/O failed: -6", are omitted here; they repeat between 00:12:06.966 and 00:12:06.968 while the nvmf_delete_subsystem test tears down nqn.2016-06.io.spdk:cnode1 under active I/O from spdk_nvme_perf ...]
00:12:06.968 Initializing NVMe Controllers
00:12:06.968 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:12:06.968 Controller IO queue size 128, less than required.
00:12:06.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:06.968 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:06.968 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:06.968 Initialization complete. Launching workers.
00:12:06.968 ========================================================
00:12:06.968 Latency(us)
00:12:06.968 Device Information : IOPS MiB/s Average min max
00:12:06.968 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.67 0.04 1591023.90 1000087.61 2967515.80
00:12:06.968 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.67 0.04 1592401.60 1000830.08 2968542.96
00:12:06.968 ========================================================
00:12:06.968 Total : 161.34 0.08 1591712.75 1000087.61 2968542.96
00:12:06.968
00:12:06.968 14:54:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:12:06.968 14:54:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1731894
00:12:06.968 14:54:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:12:06.968 [2024-07-15 14:54:22.878468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:12:06.968 [2024-07-15 14:54:22.878500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
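The xtrace records just above come from the wait loop in test/nvmf/target/delete_subsystem.sh (lines 34-38 in this build): after the subsystem is deleted, the script polls the spdk_nvme_perf process it started earlier until that process notices the dead controller and exits. A minimal bash sketch of that pattern follows; only the kill -0 probe, the 0.5 s sleep and the 30-iteration bound are taken from the trace, while the variable name perf_pid and the error handling are illustrative, not the literal script.

# Bounded wait for a background I/O generator to exit, as traced above.
perf_pid=$1          # PID of the spdk_nvme_perf process started against the subsystem
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do      # process still alive?
    if (( delay++ > 30 )); then                # roughly 15 s at 0.5 s per iteration
        echo "spdk_nvme_perf ($perf_pid) did not exit in time" >&2
        exit 1
    fi
    sleep 0.5
done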
00:12:06.968 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1731894 00:12:07.535 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1731894) - No such process 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1731894 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1731894 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1731894 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.535 [2024-07-15 14:54:23.398964] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1732903 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:07.535 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:07.535 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.535 [2024-07-15 14:54:23.493746] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:08.104 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:08.105 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:08.105 14:54:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:08.365 14:54:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:08.624 14:54:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:08.624 14:54:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:08.883 14:54:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:08.883 14:54:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:08.883 14:54:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:09.454 14:54:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:09.454 14:54:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:09.454 14:54:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:10.023 14:54:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:10.023 14:54:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:10.023 14:54:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:10.627 14:54:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:10.627 14:54:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:10.627 14:54:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:10.917 14:54:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:10.917 14:54:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:10.917 14:54:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:11.488 14:54:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:11.488 14:54:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:11.488 14:54:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:12.061 14:54:27 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:12.061 14:54:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:12.061 14:54:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:12.633 14:54:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:12.633 14:54:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:12.633 14:54:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:13.205 14:54:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:13.205 14:54:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:13.205 14:54:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:13.466 14:54:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:13.466 14:54:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:13.466 14:54:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:14.037 14:54:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:14.037 14:54:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:14.037 14:54:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:14.609 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:14.609 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:14.609 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:14.609 Initializing NVMe Controllers 00:12:14.609 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:14.609 Controller IO queue size 128, less than required. 00:12:14.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:14.609 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:14.609 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:14.609 Initialization complete. Launching workers. 
00:12:14.609 ======================================================== 00:12:14.609 Latency(us) 00:12:14.609 Device Information : IOPS MiB/s Average min max 00:12:14.609 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001039.77 1000037.81 1003201.19 00:12:14.609 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1001718.39 1000049.40 1005421.90 00:12:14.609 ======================================================== 00:12:14.609 Total : 256.00 0.12 1001379.08 1000037.81 1005421.90 00:12:14.609 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1732903 00:12:15.181 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1732903) - No such process 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1732903 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.181 14:54:30 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:15.181 rmmod nvme_rdma 00:12:15.181 rmmod nvme_fabrics 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1731847 ']' 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1731847 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1731847 ']' 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1731847 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1731847 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1731847' 00:12:15.181 killing process with pid 1731847 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 
1731847 00:12:15.181 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1731847 00:12:15.442 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:15.442 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:15.442 00:12:15.442 real 0m21.664s 00:12:15.442 user 0m50.414s 00:12:15.442 sys 0m6.862s 00:12:15.442 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.442 14:54:31 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 ************************************ 00:12:15.442 END TEST nvmf_delete_subsystem 00:12:15.442 ************************************ 00:12:15.442 14:54:31 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:15.442 14:54:31 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:12:15.442 14:54:31 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:15.442 14:54:31 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.442 14:54:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 ************************************ 00:12:15.442 START TEST nvmf_ns_masking 00:12:15.442 ************************************ 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:12:15.442 * Looking for test storage... 00:12:15.442 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
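The run now moves from nvmf_delete_subsystem into nvmf_ns_masking, and the nvmf/common.sh trace above generates the host identity with nvme gen-hostnqn, then keeps the UUID portion as the host ID and packs both into the NVME_HOST argument array. A short sketch of that derivation, assuming nvme-cli is installed; the trace only shows the resulting values, so the parameter expansion below is one plausible way to strip the prefix, and the connect line at the end is only an illustration of how the array is meant to be consumed, not a command from this build.

# Derive a UUID-based host identity and keep it as reusable nvme-cli arguments.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the trailing <uuid>
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

# Illustrative use:
# nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"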
00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.442 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e6e8410d-05cc-4897-9744-956ecc08f1cf 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a88675a9-a420-4198-ba2c-b8f2a485dafa 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=27ac467a-4251-4cfb-b7b6-bdd18b579494 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.443 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:15.704 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:15.704 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:15.704 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.704 14:54:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.704 14:54:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.704 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:15.704 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:15.704 14:54:31 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:15.704 14:54:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:23.849 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:23.849 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:23.849 Found net devices under 0000:98:00.0: mlx_0_0 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:23.849 Found net devices under 0000:98:00.1: mlx_0_1 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- 
# '[' Linux '!=' Linux ']' 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:23.849 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:23.849 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:12:23.849 altname enp152s0f0np0 00:12:23.849 altname ens817f0np0 00:12:23.849 inet 192.168.100.8/24 scope global mlx_0_0 00:12:23.849 valid_lft forever preferred_lft forever 00:12:23.849 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:23.850 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:23.850 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:12:23.850 altname enp152s0f1np1 00:12:23.850 altname ens817f1np1 00:12:23.850 inet 192.168.100.9/24 scope global mlx_0_1 00:12:23.850 valid_lft forever preferred_lft forever 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:23.850 192.168.100.9' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:23.850 192.168.100.9' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:23.850 192.168.100.9' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1738612 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1738612 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1738612 ']' 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.850 14:54:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:23.850 [2024-07-15 14:54:39.465172] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:23.850 [2024-07-15 14:54:39.465255] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.850 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.850 [2024-07-15 14:54:39.541071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.850 [2024-07-15 14:54:39.614456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.850 [2024-07-15 14:54:39.614498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.850 [2024-07-15 14:54:39.614506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.850 [2024-07-15 14:54:39.614513] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.850 [2024-07-15 14:54:39.614519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.850 [2024-07-15 14:54:39.614541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.421 14:54:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.421 14:54:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:24.421 14:54:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.421 14:54:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:24.421 14:54:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:24.421 14:54:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.421 14:54:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:24.421 [2024-07-15 14:54:40.467750] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb0bf90/0xb10480) succeed. 00:12:24.421 [2024-07-15 14:54:40.480989] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb0d490/0xb51b10) succeed. 
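(Note: the trace that follows exercises SPDK's namespace-masking RPCs against the RDMA listener that was just brought up. As a rough, condensed sketch of the same flow seen in the trace below — script paths shortened, and using the same subsystem/host NQNs the test itself uses — the steps are approximately:

  # create a backing bdev and an NVMe-oF subsystem with an RDMA listener
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # attach a namespace that is NOT automatically visible to hosts
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # selectively expose or hide that namespace for a given host NQN
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # on the initiator side, visibility is checked via list-ns and the namespace NGUID;
  # an all-zero NGUID from id-ns means the namespace is masked for this host
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 192.168.100.8 -s 4420 -i 4
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

End of note; the verbatim trace continues below.)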
00:12:24.681 14:54:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:24.681 14:54:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:24.681 14:54:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:24.681 Malloc1 00:12:24.681 14:54:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.941 Malloc2 00:12:24.941 14:54:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:25.203 14:54:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:25.203 14:54:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:25.464 [2024-07-15 14:54:41.366528] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:25.464 14:54:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:25.464 14:54:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27ac467a-4251-4cfb-b7b6-bdd18b579494 -a 192.168.100.8 -s 4420 -i 4 00:12:26.037 14:54:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.037 14:54:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:26.037 14:54:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.037 14:54:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:26.037 14:54:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:27.952 [ 0]:0x1 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3a3b03d7aebe4e13b5156fd2829f4290 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3a3b03d7aebe4e13b5156fd2829f4290 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.952 14:54:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:28.212 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:28.212 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.212 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:28.212 [ 0]:0x1 00:12:28.212 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:28.212 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3a3b03d7aebe4e13b5156fd2829f4290 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3a3b03d7aebe4e13b5156fd2829f4290 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:28.213 [ 1]:0x2 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d1ba618bc0ad48c6b0223eac00f31dd9 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d1ba618bc0ad48c6b0223eac00f31dd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:28.213 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.784 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.044 14:54:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:29.044 14:54:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:29.044 14:54:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27ac467a-4251-4cfb-b7b6-bdd18b579494 -a 192.168.100.8 -s 4420 -i 4 00:12:29.613 14:54:45 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:29.613 14:54:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:29.613 14:54:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.613 14:54:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:29.613 14:54:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:29.613 14:54:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:31.526 14:54:47 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.526 [ 0]:0x2 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d1ba618bc0ad48c6b0223eac00f31dd9 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d1ba618bc0ad48c6b0223eac00f31dd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.526 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.786 [ 0]:0x1 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3a3b03d7aebe4e13b5156fd2829f4290 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3a3b03d7aebe4e13b5156fd2829f4290 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.786 [ 1]:0x2 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d1ba618bc0ad48c6b0223eac00f31dd9 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d1ba618bc0ad48c6b0223eac00f31dd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.786 14:54:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:32.046 14:54:48 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:32.046 [ 0]:0x2 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:32.046 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:32.305 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d1ba618bc0ad48c6b0223eac00f31dd9 00:12:32.305 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d1ba618bc0ad48c6b0223eac00f31dd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:32.305 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:32.305 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.565 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:32.826 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:32.826 14:54:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27ac467a-4251-4cfb-b7b6-bdd18b579494 -a 192.168.100.8 -s 4420 -i 4 00:12:33.086 14:54:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:33.086 14:54:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.086 14:54:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.086 14:54:49 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:33.086 14:54:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:33.086 14:54:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.632 [ 0]:0x1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3a3b03d7aebe4e13b5156fd2829f4290 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3a3b03d7aebe4e13b5156fd2829f4290 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.632 [ 1]:0x2 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d1ba618bc0ad48c6b0223eac00f31dd9 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d1ba618bc0ad48c6b0223eac00f31dd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.632 [ 0]:0x2 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d1ba618bc0ad48c6b0223eac00f31dd9 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d1ba618bc0ad48c6b0223eac00f31dd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:35.632 14:54:51 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:35.632 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:35.632 [2024-07-15 14:54:51.691082] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:35.893 request: 00:12:35.893 { 00:12:35.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:35.893 "nsid": 2, 00:12:35.893 "host": "nqn.2016-06.io.spdk:host1", 00:12:35.893 "method": "nvmf_ns_remove_host", 00:12:35.893 "req_id": 1 00:12:35.893 } 00:12:35.893 Got JSON-RPC error response 00:12:35.893 response: 00:12:35.893 { 00:12:35.893 "code": -32602, 00:12:35.893 "message": "Invalid parameters" 00:12:35.893 } 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:35.893 14:54:51 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.893 [ 0]:0x2 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d1ba618bc0ad48c6b0223eac00f31dd9 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d1ba618bc0ad48c6b0223eac00f31dd9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:35.893 14:54:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1741440 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1741440 /var/tmp/host.sock 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1741440 ']' 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:36.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.154 14:54:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:36.415 [2024-07-15 14:54:52.254367] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:12:36.415 [2024-07-15 14:54:52.254417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741440 ] 00:12:36.415 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.415 [2024-07-15 14:54:52.339369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.415 [2024-07-15 14:54:52.403388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.985 14:54:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.985 14:54:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:36.985 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.246 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:37.506 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e6e8410d-05cc-4897-9744-956ecc08f1cf 00:12:37.506 14:54:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:37.506 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E6E8410D05CC48979744956ECC08F1CF -i 00:12:37.506 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a88675a9-a420-4198-ba2c-b8f2a485dafa 00:12:37.506 14:54:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:37.506 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A88675A9A4204198BA2CB8F2A485DAFA -i 00:12:37.765 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:37.765 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:38.026 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:38.026 14:54:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:38.286 nvme0n1 00:12:38.286 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:38.286 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host2 -b nvme1 00:12:38.546 nvme1n2 00:12:38.546 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:38.546 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:38.546 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:38.546 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:38.546 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:38.806 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:38.806 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:38.806 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:38.806 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:38.806 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e6e8410d-05cc-4897-9744-956ecc08f1cf == \e\6\e\8\4\1\0\d\-\0\5\c\c\-\4\8\9\7\-\9\7\4\4\-\9\5\6\e\c\c\0\8\f\1\c\f ]] 00:12:38.806 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:38.806 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:38.806 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:39.067 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a88675a9-a420-4198-ba2c-b8f2a485dafa == \a\8\8\6\7\5\a\9\-\a\4\2\0\-\4\1\9\8\-\b\a\2\c\-\b\8\f\2\a\4\8\5\d\a\f\a ]] 00:12:39.067 14:54:54 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1741440 00:12:39.068 14:54:54 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1741440 ']' 00:12:39.068 14:54:54 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1741440 00:12:39.068 14:54:54 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:39.068 14:54:54 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.068 14:54:54 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1741440 00:12:39.068 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:39.068 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:39.068 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1741440' 00:12:39.068 killing process with pid 1741440 00:12:39.068 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1741440 00:12:39.068 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1741440 00:12:39.329 14:54:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@142 -- # 
nvmftestfini 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:39.590 rmmod nvme_rdma 00:12:39.590 rmmod nvme_fabrics 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1738612 ']' 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1738612 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1738612 ']' 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1738612 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1738612 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1738612' 00:12:39.590 killing process with pid 1738612 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1738612 00:12:39.590 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1738612 00:12:39.851 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.851 14:54:55 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:39.851 00:12:39.851 real 0m24.370s 00:12:39.851 user 0m26.013s 00:12:39.851 sys 0m7.602s 00:12:39.851 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.851 14:54:55 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:39.851 ************************************ 00:12:39.851 END TEST nvmf_ns_masking 00:12:39.851 ************************************ 00:12:39.851 14:54:55 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:39.851 14:54:55 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:39.851 14:54:55 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:39.851 14:54:55 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.851 14:54:55 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.851 14:54:55 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:39.851 ************************************ 00:12:39.851 START TEST nvmf_nvme_cli 
00:12:39.851 ************************************ 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:39.851 * Looking for test storage... 00:12:39.851 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.851 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:40.112 14:54:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:48.253 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:48.253 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:48.253 Found net devices under 0000:98:00.0: mlx_0_0 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.253 14:55:03 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:48.253 Found net devices under 0000:98:00.1: mlx_0_1 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:48.253 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:48.254 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:48.254 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:12:48.254 altname enp152s0f0np0 00:12:48.254 altname ens817f0np0 00:12:48.254 inet 192.168.100.8/24 scope global mlx_0_0 00:12:48.254 valid_lft forever preferred_lft forever 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:48.254 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:48.254 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:12:48.254 altname enp152s0f1np1 00:12:48.254 altname ens817f1np1 00:12:48.254 inet 192.168.100.9/24 scope global mlx_0_1 00:12:48.254 valid_lft forever preferred_lft forever 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:48.254 192.168.100.9' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:48.254 192.168.100.9' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:48.254 192.168.100.9' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1746297 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1746297 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1746297 ']' 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.254 14:55:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:48.254 [2024-07-15 14:55:03.906459] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:48.254 [2024-07-15 14:55:03.906527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.254 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.254 [2024-07-15 14:55:03.982890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.254 [2024-07-15 14:55:04.059484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.254 [2024-07-15 14:55:04.059526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.254 [2024-07-15 14:55:04.059533] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.254 [2024-07-15 14:55:04.059540] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.254 [2024-07-15 14:55:04.059545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
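For reference, the load_ib_rdma_modules step traced above reduces to the module loads below; module names are copied from the trace, and the loop form is an illustrative condensation rather than the helper's exact code:

    # RDMA/InfiniBand core stack for the mlx5 ports
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    # host-side fabrics driver, loaded separately just before the target starts,
    # so 'nvme discover/connect -t rdma' can be used against it later
    modprobe nvme-rdma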
00:12:48.254 [2024-07-15 14:55:04.059728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.254 [2024-07-15 14:55:04.059849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.254 [2024-07-15 14:55:04.060004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.254 [2024-07-15 14:55:04.060005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.823 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.823 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:48.823 14:55:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.823 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.823 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:48.823 14:55:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.823 14:55:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:48.823 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.823 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:48.823 [2024-07-15 14:55:04.770824] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x144d200/0x14516f0) succeed. 00:12:48.823 [2024-07-15 14:55:04.785454] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x144e840/0x1492d80) succeed. 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.083 Malloc0 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.083 Malloc1 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.083 14:55:04 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.083 [2024-07-15 14:55:04.993820] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.083 14:55:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 4420 00:12:49.083 00:12:49.083 Discovery Log Number of Records 2, Generation counter 2 00:12:49.083 =====Discovery Log Entry 0====== 00:12:49.083 trtype: rdma 00:12:49.083 adrfam: ipv4 00:12:49.083 subtype: current discovery subsystem 00:12:49.083 treq: not required 00:12:49.083 portid: 0 00:12:49.083 trsvcid: 4420 00:12:49.083 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:49.083 traddr: 192.168.100.8 00:12:49.083 eflags: explicit discovery connections, duplicate discovery information 00:12:49.083 rdma_prtype: not specified 00:12:49.083 rdma_qptype: connected 00:12:49.083 rdma_cms: rdma-cm 00:12:49.083 rdma_pkey: 0x0000 00:12:49.083 =====Discovery Log Entry 1====== 00:12:49.083 trtype: rdma 00:12:49.083 adrfam: ipv4 00:12:49.083 subtype: nvme subsystem 00:12:49.083 treq: not required 00:12:49.083 portid: 0 00:12:49.083 trsvcid: 4420 00:12:49.083 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:49.083 traddr: 192.168.100.8 00:12:49.083 eflags: none 00:12:49.083 rdma_prtype: not specified 00:12:49.083 rdma_qptype: connected 00:12:49.083 rdma_cms: rdma-cm 00:12:49.083 rdma_pkey: 0x0000 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:49.083 14:55:05 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:49.083 14:55:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:50.993 14:55:06 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:50.993 14:55:06 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:50.993 14:55:06 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.993 14:55:06 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:50.993 14:55:06 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:50.993 14:55:06 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:52.902 /dev/nvme0n1 ]] 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:52.902 14:55:08 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.843 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:53.843 rmmod nvme_rdma 00:12:54.103 rmmod nvme_fabrics 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1746297 ']' 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1746297 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1746297 ']' 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1746297 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1746297 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1746297' 00:12:54.103 killing process with pid 1746297 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1746297 00:12:54.103 14:55:09 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1746297 00:12:54.363 14:55:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:54.363 14:55:10 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:54.363 00:12:54.363 real 0m14.428s 00:12:54.363 user 0m26.858s 00:12:54.363 sys 0m6.413s 00:12:54.363 14:55:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:54.363 14:55:10 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:54.363 ************************************ 00:12:54.363 END TEST nvmf_nvme_cli 00:12:54.363 ************************************ 00:12:54.363 14:55:10 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:54.363 14:55:10 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:12:54.363 14:55:10 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:54.363 14:55:10 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:54.363 14:55:10 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.363 14:55:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:54.363 ************************************ 00:12:54.363 START TEST nvmf_host_management 00:12:54.363 ************************************ 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:54.363 * Looking for test storage... 
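Condensed from the nvme_cli test that just finished: the host-side connect/verify/disconnect cycle it exercised. Addresses, NQNs, and the serial string are verbatim from the trace; the wait loop is a simplification of the waitforserial helper, which retries up to 15 times with a 2-second sleep:

    nvme discover -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
    # both Malloc namespaces should surface with the subsystem serial before the test proceeds
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )); do sleep 2; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1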
00:12:54.363 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.363 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.677 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:54.677 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:54.677 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.677 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.677 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.677 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.677 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:54.677 14:55:10 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.678 14:55:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:02.844 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:02.844 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:02.844 Found net devices under 0000:98:00.0: mlx_0_0 00:13:02.844 
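The PCI walk above matches each mlx5 function (vendor 0x15b3, device 0x1015) and then resolves it to a kernel netdev through sysfs; hand-expanded for the first port found, with paths and names as printed in the trace:

    pci=0000:98:00.0
    ls "/sys/bus/pci/devices/$pci/net/"     # -> mlx_0_0, the name echoed above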
14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:02.844 Found net devices under 0000:98:00.1: mlx_0_1 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:02.844 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:02.845 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:02.845 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:13:02.845 altname enp152s0f0np0 00:13:02.845 altname ens817f0np0 00:13:02.845 inet 192.168.100.8/24 scope global mlx_0_0 00:13:02.845 valid_lft forever preferred_lft forever 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:02.845 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:02.845 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:13:02.845 altname enp152s0f1np1 00:13:02.845 altname ens817f1np1 00:13:02.845 inet 192.168.100.9/24 scope global mlx_0_1 00:13:02.845 valid_lft forever preferred_lft forever 
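The get_ip_address calls traced above boil down to one ip/awk/cut pipeline per interface; the sketch below mirrors the helper and the variable names the script assigns, with the addresses matching the ip output shown:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0; get_ip_address mlx_0_1)"        # 192.168.100.8, 192.168.100.9
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9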
00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:02.845 192.168.100.9' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:02.845 192.168.100.9' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:02.845 192.168.100.9' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1751881 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1751881 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1751881 ']' 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.845 14:55:18 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.845 [2024-07-15 14:55:18.529062] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
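nvmfappstart, as traced, launches the target and waitforlisten then polls its RPC socket; the binary path, flags, socket path, and retry budget below are from the trace, while the polling loop itself is an approximation of the helper:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # waitforlisten allows up to 100 retries before giving up
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done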
00:13:02.845 [2024-07-15 14:55:18.529135] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.845 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.845 [2024-07-15 14:55:18.617589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.845 [2024-07-15 14:55:18.712453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.845 [2024-07-15 14:55:18.712517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.845 [2024-07-15 14:55:18.712525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.845 [2024-07-15 14:55:18.712533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.845 [2024-07-15 14:55:18.712540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.845 [2024-07-15 14:55:18.712681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.845 [2024-07-15 14:55:18.712846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.845 [2024-07-15 14:55:18.713013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.845 [2024-07-15 14:55:18.713012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:03.418 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.418 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:03.418 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.418 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.418 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:03.418 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.418 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:03.418 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.418 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:03.418 [2024-07-15 14:55:19.390094] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x204f6b0/0x2053ba0) succeed. 00:13:03.418 [2024-07-15 14:55:19.403623] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2050cf0/0x2095230) succeed. 
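At this point nvmf_tgt is up with core mask 0x1E (reactors on cores 1 through 4), the RDMA transport has been created over the RPC socket, and both mlx5 ports have registered as IB devices. Outside the rpc_cmd wrapper used by the harness, the same transport step is roughly a direct rpc.py call against the default socket (a sketch; the options are copied from this run):

    # create the RDMA transport on the running target (default socket /var/tmp/spdk.sock)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192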
00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:03.679 Malloc0 00:13:03.679 [2024-07-15 14:55:19.582425] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1752145 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1752145 /var/tmp/bdevperf.sock 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1752145 ']' 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:03.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
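host_management.sh@22-30 writes a fresh rpcs.txt and replays it through rpc_cmd in one batch; the log only shows the resulting Malloc0 bdev and the RDMA listener on 192.168.100.8:4420, not the file itself. The following is therefore a plausible reconstruction, not a verbatim dump (bdev size, serial number and exact option spelling are assumptions), shown with a plain rpc.py loop instead of the harness wrapper:

    # rpcs.txt - reconstructed for illustration
    cat > rpcs.txt <<'EOF'
    bdev_malloc_create -b Malloc0 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    EOF
    # replay each line as an individual RPC (word splitting of $cmd is intentional)
    while read -r cmd; do scripts/rpc.py $cmd; done < rpcs.txt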
00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:03.679 { 00:13:03.679 "params": { 00:13:03.679 "name": "Nvme$subsystem", 00:13:03.679 "trtype": "$TEST_TRANSPORT", 00:13:03.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:03.679 "adrfam": "ipv4", 00:13:03.679 "trsvcid": "$NVMF_PORT", 00:13:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:03.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:03.679 "hdgst": ${hdgst:-false}, 00:13:03.679 "ddgst": ${ddgst:-false} 00:13:03.679 }, 00:13:03.679 "method": "bdev_nvme_attach_controller" 00:13:03.679 } 00:13:03.679 EOF 00:13:03.679 )") 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:03.679 14:55:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:03.679 "params": { 00:13:03.679 "name": "Nvme0", 00:13:03.679 "trtype": "rdma", 00:13:03.679 "traddr": "192.168.100.8", 00:13:03.679 "adrfam": "ipv4", 00:13:03.679 "trsvcid": "4420", 00:13:03.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:03.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:03.679 "hdgst": false, 00:13:03.679 "ddgst": false 00:13:03.679 }, 00:13:03.679 "method": "bdev_nvme_attach_controller" 00:13:03.679 }' 00:13:03.679 [2024-07-15 14:55:19.683601] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:03.679 [2024-07-15 14:55:19.683654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752145 ] 00:13:03.679 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.940 [2024-07-15 14:55:19.749740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.940 [2024-07-15 14:55:19.814662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.940 Running I/O for 10 seconds... 
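gen_nvmf_target_json expands the heredoc above once per controller and jq folds the fragments into the config that bdevperf reads through --json /dev/fd/63. Only the bdev_nvme_attach_controller fragment is printed in the log; wrapped in the generic layout the --json option expects, the file handed to bdevperf would look roughly like this (the outer subsystems/bdev wrapper is an assumption, the params object is verbatim from the printf above):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }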
00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1200 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1200 -ge 100 ']' 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.513 14:55:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:05.902 [2024-07-15 14:55:21.564145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 00:13:05.902 [2024-07-15 14:55:21.564182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:13:05.902 [2024-07-15 14:55:21.564208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001388a680 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:13:05.902 [2024-07-15 14:55:21.564399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.902 [2024-07-15 14:55:21.564409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:13:05.903 [2024-07-15 14:55:21.564416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:13:05.903 [2024-07-15 14:55:21.564433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:13:05.903 [2024-07-15 14:55:21.564449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:13:05.903 [2024-07-15 14:55:21.564466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 
key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:13:05.903 
[2024-07-15 14:55:21.564634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:13:05.903 [2024-07-15 14:55:21.564700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564785] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182700 00:13:05.903 [2024-07-15 14:55:21.564948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182600 00:13:05.903 [2024-07-15 14:55:21.564964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:13:05.903 [2024-07-15 14:55:21.564981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.564990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x182400 00:13:05.903 [2024-07-15 14:55:21.564997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.565006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x182400 00:13:05.903 [2024-07-15 14:55:21.565013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.565022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x182400 00:13:05.903 [2024-07-15 14:55:21.565029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.903 [2024-07-15 14:55:21.565038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134a6000 len:0x10000 key:0x182400 00:13:05.903 [2024-07-15 14:55:21.565045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134c7000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134e8000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013509000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001352a000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001354b000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001356c000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 [2024-07-15 14:55:21.565241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b400000 len:0x10000 key:0x182400 00:13:05.904 [2024-07-15 14:55:21.565248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e20d2000 sqhd:52b0 p:0 m:0 dnr:0 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1752145 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:05.904 { 00:13:05.904 "params": { 00:13:05.904 "name": "Nvme$subsystem", 00:13:05.904 "trtype": "$TEST_TRANSPORT", 00:13:05.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:05.904 "adrfam": "ipv4", 00:13:05.904 "trsvcid": "$NVMF_PORT", 00:13:05.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:05.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:05.904 "hdgst": ${hdgst:-false}, 00:13:05.904 "ddgst": ${ddgst:-false} 00:13:05.904 }, 00:13:05.904 "method": "bdev_nvme_attach_controller" 00:13:05.904 } 00:13:05.904 EOF 00:13:05.904 )") 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:05.904 14:55:21 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:05.904 "params": { 00:13:05.904 "name": "Nvme0", 00:13:05.904 "trtype": "rdma", 00:13:05.904 "traddr": "192.168.100.8", 00:13:05.904 "adrfam": "ipv4", 00:13:05.904 "trsvcid": "4420", 00:13:05.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:05.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:05.904 "hdgst": false, 00:13:05.904 "ddgst": false 00:13:05.904 }, 00:13:05.904 "method": "bdev_nvme_attach_controller" 00:13:05.904 }' 00:13:05.904 [2024-07-15 14:55:21.618387] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
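The long run of ABORTED - SQ DELETION completions above is the intended effect of the test: host_management.sh@84-87 removes nqn.2016-06.io.spdk:host0 from the subsystem while bdevperf still has up to 64 commands in flight, so the target tears down that host's queue pairs and the outstanding READ/WRITE commands complete aborted; the host is re-added, the first bdevperf is killed with -9, and the second short bdevperf run started just above verifies the target still serves I/O afterwards. The toggle itself, as issued earlier in this log, is just two RPCs followed by a short sleep:

    # revoke and restore host access while I/O is running (commands as in host_management.sh)
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1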
00:13:05.904 [2024-07-15 14:55:21.618437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752602 ] 00:13:05.904 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.904 [2024-07-15 14:55:21.684981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.904 [2024-07-15 14:55:21.750271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.904 Running I/O for 1 seconds... 00:13:07.287 00:13:07.287 Latency(us) 00:13:07.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.287 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:07.287 Verification LBA range: start 0x0 length 0x400 00:13:07.287 Nvme0n1 : 1.01 2541.73 158.86 0.00 0.00 24608.26 1024.00 47404.37 00:13:07.287 =================================================================================================================== 00:13:07.287 Total : 2541.73 158.86 0.00 0.00 24608.26 1024.00 47404.37 00:13:07.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1752145 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:07.287 rmmod nvme_rdma 00:13:07.287 rmmod nvme_fabrics 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1751881 ']' 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1751881 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1751881 ']' 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1751881 00:13:07.287 14:55:23 
nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1751881 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1751881' 00:13:07.287 killing process with pid 1751881 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1751881 00:13:07.287 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1751881 00:13:07.548 [2024-07-15 14:55:23.388906] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:07.548 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.548 14:55:23 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:07.548 14:55:23 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:07.548 00:13:07.548 real 0m13.092s 00:13:07.548 user 0m24.481s 00:13:07.548 sys 0m6.792s 00:13:07.548 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:07.548 14:55:23 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:07.548 ************************************ 00:13:07.548 END TEST nvmf_host_management 00:13:07.548 ************************************ 00:13:07.548 14:55:23 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:07.548 14:55:23 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:07.548 14:55:23 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:07.548 14:55:23 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.548 14:55:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:07.548 ************************************ 00:13:07.548 START TEST nvmf_lvol 00:13:07.548 ************************************ 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:07.548 * Looking for test storage... 
00:13:07.548 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
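nvmf/common.sh derives the host identity used by later nvme connect calls from nvme-cli: gen-hostnqn produces an NQN that embeds a UUID, and that UUID is reused as the host ID. A small sketch of the same derivation (the parameter expansion is one way to do it, not necessarily the exact line from common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")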
00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.548 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.808 14:55:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.946 14:55:31 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:15.946 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:15.946 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:15.946 Found net devices under 0000:98:00.0: mlx_0_0 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:15.946 Found net devices under 0000:98:00.1: mlx_0_1 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:15.946 14:55:31 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:15.946 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:15.946 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:13:15.946 altname enp152s0f0np0 00:13:15.946 altname ens817f0np0 00:13:15.946 inet 192.168.100.8/24 scope global mlx_0_0 00:13:15.946 valid_lft forever preferred_lft forever 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:15.946 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:15.946 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:13:15.946 altname enp152s0f1np1 00:13:15.946 altname ens817f1np1 00:13:15.946 inet 192.168.100.9/24 scope global mlx_0_1 00:13:15.946 valid_lft forever preferred_lft forever 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:15.946 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:15.947 192.168.100.9' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:15.947 192.168.100.9' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:15.947 192.168.100.9' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1757047 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1757047 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@829 -- # '[' -z 1757047 ']' 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.947 14:55:31 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:15.947 [2024-07-15 14:55:31.556580] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:15.947 [2024-07-15 14:55:31.556650] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.947 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.947 [2024-07-15 14:55:31.630643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.947 [2024-07-15 14:55:31.705282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.947 [2024-07-15 14:55:31.705321] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.947 [2024-07-15 14:55:31.705329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.947 [2024-07-15 14:55:31.705335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.947 [2024-07-15 14:55:31.705340] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.947 [2024-07-15 14:55:31.705476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.947 [2024-07-15 14:55:31.705596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.947 [2024-07-15 14:55:31.705599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.518 14:55:32 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.518 14:55:32 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:16.518 14:55:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:16.518 14:55:32 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:16.518 14:55:32 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:16.518 14:55:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.518 14:55:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:16.518 [2024-07-15 14:55:32.556420] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ea2720/0x1ea6c10) succeed. 00:13:16.518 [2024-07-15 14:55:32.570515] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ea3cc0/0x1ee82a0) succeed. 
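The trace above walks nvmf/common.sh through RDMA NIC discovery: it resolves mlx_0_0 and mlx_0_1 to 192.168.100.8 and 192.168.100.9, then nvmfappstart launches the target and nvmf_create_transport registers the RDMA transport. A condensed sketch of that sequence, using only commands and options visible in the trace (workspace paths shortened; the helper name mirrors the one in common.sh):

    # Resolve the IPv4 address of an RDMA-capable netdev, as common.sh@113 does.
    get_ip_address() {
        local iface=$1
        ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run

    # Start the target (same core mask and trace flags as the log) and add the
    # RDMA transport with the options recorded above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192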
00:13:16.777 14:55:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.036 14:55:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:17.036 14:55:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.036 14:55:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:17.036 14:55:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:17.295 14:55:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:17.554 14:55:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b0a32b61-d97d-4306-a68b-4eef037decf2 00:13:17.554 14:55:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b0a32b61-d97d-4306-a68b-4eef037decf2 lvol 20 00:13:17.554 14:55:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1f53e7c5-59fd-426d-bd04-7abd8f91f103 00:13:17.554 14:55:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:17.815 14:55:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1f53e7c5-59fd-426d-bd04-7abd8f91f103 00:13:17.815 14:55:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:18.075 [2024-07-15 14:55:34.013930] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:18.075 14:55:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:18.335 14:55:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1757671 00:13:18.335 14:55:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:18.335 14:55:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:18.335 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.275 14:55:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1f53e7c5-59fd-426d-bd04-7abd8f91f103 MY_SNAPSHOT 00:13:19.535 14:55:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5eb8d7e3-00a5-4d53-b868-9850327662ab 00:13:19.535 14:55:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1f53e7c5-59fd-426d-bd04-7abd8f91f103 30 00:13:19.535 14:55:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5eb8d7e3-00a5-4d53-b868-9850327662ab MY_CLONE 00:13:19.795 14:55:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=dbd32087-d1a2-4a3e-b347-8f25b0d5149f 00:13:19.795 14:55:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate dbd32087-d1a2-4a3e-b347-8f25b0d5149f 00:13:20.055 14:55:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1757671 00:13:30.061 Initializing NVMe Controllers 00:13:30.061 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:13:30.061 Controller IO queue size 128, less than required. 00:13:30.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:30.061 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:30.061 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:30.061 Initialization complete. Launching workers. 00:13:30.061 ======================================================== 00:13:30.061 Latency(us) 00:13:30.061 Device Information : IOPS MiB/s Average min max 00:13:30.061 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 22823.60 89.15 5609.38 2292.14 29216.70 00:13:30.061 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 23064.10 90.09 5550.26 3006.10 28825.64 00:13:30.061 ======================================================== 00:13:30.061 Total : 45887.69 179.25 5579.67 2292.14 29216.70 00:13:30.061 00:13:30.061 14:55:45 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:30.061 14:55:45 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1f53e7c5-59fd-426d-bd04-7abd8f91f103 00:13:30.061 14:55:45 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0a32b61-d97d-4306-a68b-4eef037decf2 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:30.061 rmmod nvme_rdma 00:13:30.061 rmmod nvme_fabrics 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1757047 ']' 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1757047 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1757047 ']' 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@952 -- # kill -0 1757047 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.061 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1757047 00:13:30.320 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:30.320 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:30.320 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1757047' 00:13:30.320 killing process with pid 1757047 00:13:30.320 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1757047 00:13:30.320 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1757047 00:13:30.320 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.320 14:55:46 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:30.320 00:13:30.320 real 0m22.886s 00:13:30.320 user 1m10.574s 00:13:30.320 sys 0m6.827s 00:13:30.320 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.320 14:55:46 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:30.320 ************************************ 00:13:30.320 END TEST nvmf_lvol 00:13:30.320 ************************************ 00:13:30.581 14:55:46 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:30.581 14:55:46 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:30.581 14:55:46 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.581 14:55:46 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.581 14:55:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:30.581 ************************************ 00:13:30.581 START TEST nvmf_lvs_grow 00:13:30.581 ************************************ 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:30.581 * Looking for test storage... 
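The nvmf_lvol run that just finished layers an lvol stack on two malloc bdevs, exports it over RDMA, then exercises snapshot, resize, clone and inflate before tearing everything down. A minimal sketch of that flow, condensed from the rpc.py calls in the trace (rpc.py stands for the full scripts/rpc.py path, and the UUID variables hold the values each create call returns):

    # Build the backing raid0 and lvstore, then carve out an lvol.
    rpc.py bdev_malloc_create 64 512                      # -> Malloc0
    rpc.py bdev_malloc_create 64 512                      # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # size argument as recorded

    # Export the lvol over NVMe-oF/RDMA.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

    # Snapshot, grow, clone and inflate while spdk_nvme_perf keeps writing.
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"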
00:13:30.581 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.581 14:55:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:38.718 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:38.719 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:38.719 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:38.719 Found net devices under 0000:98:00.0: mlx_0_0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.719 14:55:54 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:38.719 Found net devices under 0000:98:00.1: mlx_0_1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:38.719 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:38.719 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:13:38.719 altname enp152s0f0np0 00:13:38.719 altname ens817f0np0 00:13:38.719 inet 192.168.100.8/24 scope global mlx_0_0 00:13:38.719 valid_lft forever preferred_lft forever 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:38.719 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:38.719 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:13:38.719 altname enp152s0f1np1 00:13:38.719 altname ens817f1np1 00:13:38.719 inet 192.168.100.9/24 scope global mlx_0_1 00:13:38.719 valid_lft forever preferred_lft forever 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:38.719 192.168.100.9' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:38.719 192.168.100.9' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:38.719 192.168.100.9' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1764352 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1764352 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1764352 ']' 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.719 14:55:54 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:38.719 [2024-07-15 14:55:54.503295] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:38.719 [2024-07-15 14:55:54.503349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.719 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.719 [2024-07-15 14:55:54.569857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.720 [2024-07-15 14:55:54.633385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.720 [2024-07-15 14:55:54.633428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.720 [2024-07-15 14:55:54.633436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.720 [2024-07-15 14:55:54.633442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.720 [2024-07-15 14:55:54.633447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
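The lvs_grow preamble repeats the same PCI and RDMA interface discovery, then derives the two target addresses from the per-port list at common.sh@456-458. A small sketch of that split, using the values from this run:

    # Split the per-port address list into first/second target IPs.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'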
00:13:38.720 [2024-07-15 14:55:54.633468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.289 14:55:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.289 14:55:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:39.289 14:55:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.289 14:55:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.289 14:55:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:39.289 14:55:55 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.289 14:55:55 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:39.550 [2024-07-15 14:55:55.492532] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa97f90/0xa9c480) succeed. 00:13:39.550 [2024-07-15 14:55:55.507318] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa99490/0xaddb10) succeed. 00:13:39.550 14:55:55 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:39.550 14:55:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:39.550 14:55:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.550 14:55:55 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:39.811 ************************************ 00:13:39.811 START TEST lvs_grow_clean 00:13:39.811 ************************************ 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:39.811 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:40.072 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:40.072 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:40.072 14:55:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:40.332 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:40.332 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:40.332 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 lvol 150 00:13:40.332 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f9af7f19-32c5-4419-b156-3f1dede9000e 00:13:40.332 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:40.332 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:40.592 [2024-07-15 14:55:56.453404] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:40.592 [2024-07-15 14:55:56.453457] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:40.592 true 00:13:40.592 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:40.592 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:40.592 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:40.592 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:40.852 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9af7f19-32c5-4419-b156-3f1dede9000e 00:13:41.112 14:55:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:41.112 [2024-07-15 14:55:57.091631] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:41.112 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:41.373 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1764943 
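lvs_grow_clean provisions its lvstore on a file-backed AIO bdev so the backing device can be grown mid-test. A condensed sketch of the setup recorded above (the aio_bdev file actually lives under test/nvmf/target in the workspace; rpc.py abbreviates the full scripts/rpc.py path, and $lvs holds the lvstore UUID the create call returns):

    # 200 MiB backing file exposed as an AIO bdev with 4 KiB blocks.
    truncate -s 200M aio_bdev
    rpc.py bdev_aio_create aio_bdev aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 49 data clusters
    rpc.py bdev_lvol_create -u "$lvs" lvol 150

    # Grow the backing file and rescan: the AIO bdev goes from 51200 to 102400 blocks.
    truncate -s 400M aio_bdev
    rpc.py bdev_aio_rescan aio_bdev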
00:13:41.373 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:41.373 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:41.373 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1764943 /var/tmp/bdevperf.sock 00:13:41.373 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1764943 ']' 00:13:41.374 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:41.374 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.374 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:41.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:41.374 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.374 14:55:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:41.374 [2024-07-15 14:55:57.310058] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:41.374 [2024-07-15 14:55:57.310111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764943 ] 00:13:41.374 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.374 [2024-07-15 14:55:57.393146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.633 [2024-07-15 14:55:57.457950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.203 14:55:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.203 14:55:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:42.203 14:55:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:42.462 Nvme0n1 00:13:42.462 14:55:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:42.462 [ 00:13:42.462 { 00:13:42.462 "name": "Nvme0n1", 00:13:42.462 "aliases": [ 00:13:42.462 "f9af7f19-32c5-4419-b156-3f1dede9000e" 00:13:42.462 ], 00:13:42.462 "product_name": "NVMe disk", 00:13:42.462 "block_size": 4096, 00:13:42.462 "num_blocks": 38912, 00:13:42.462 "uuid": "f9af7f19-32c5-4419-b156-3f1dede9000e", 00:13:42.462 "assigned_rate_limits": { 00:13:42.462 "rw_ios_per_sec": 0, 00:13:42.462 "rw_mbytes_per_sec": 0, 00:13:42.462 "r_mbytes_per_sec": 0, 00:13:42.462 "w_mbytes_per_sec": 0 00:13:42.462 }, 00:13:42.462 "claimed": false, 00:13:42.462 "zoned": false, 00:13:42.462 "supported_io_types": { 00:13:42.462 "read": true, 00:13:42.462 
"write": true, 00:13:42.462 "unmap": true, 00:13:42.462 "flush": true, 00:13:42.462 "reset": true, 00:13:42.462 "nvme_admin": true, 00:13:42.462 "nvme_io": true, 00:13:42.462 "nvme_io_md": false, 00:13:42.462 "write_zeroes": true, 00:13:42.462 "zcopy": false, 00:13:42.462 "get_zone_info": false, 00:13:42.462 "zone_management": false, 00:13:42.462 "zone_append": false, 00:13:42.462 "compare": true, 00:13:42.462 "compare_and_write": true, 00:13:42.462 "abort": true, 00:13:42.462 "seek_hole": false, 00:13:42.462 "seek_data": false, 00:13:42.462 "copy": true, 00:13:42.462 "nvme_iov_md": false 00:13:42.462 }, 00:13:42.462 "memory_domains": [ 00:13:42.462 { 00:13:42.462 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:42.462 "dma_device_type": 0 00:13:42.462 } 00:13:42.462 ], 00:13:42.462 "driver_specific": { 00:13:42.462 "nvme": [ 00:13:42.462 { 00:13:42.462 "trid": { 00:13:42.462 "trtype": "RDMA", 00:13:42.462 "adrfam": "IPv4", 00:13:42.462 "traddr": "192.168.100.8", 00:13:42.462 "trsvcid": "4420", 00:13:42.462 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:42.462 }, 00:13:42.462 "ctrlr_data": { 00:13:42.462 "cntlid": 1, 00:13:42.462 "vendor_id": "0x8086", 00:13:42.462 "model_number": "SPDK bdev Controller", 00:13:42.462 "serial_number": "SPDK0", 00:13:42.462 "firmware_revision": "24.09", 00:13:42.462 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:42.462 "oacs": { 00:13:42.462 "security": 0, 00:13:42.462 "format": 0, 00:13:42.462 "firmware": 0, 00:13:42.462 "ns_manage": 0 00:13:42.462 }, 00:13:42.462 "multi_ctrlr": true, 00:13:42.462 "ana_reporting": false 00:13:42.462 }, 00:13:42.462 "vs": { 00:13:42.462 "nvme_version": "1.3" 00:13:42.462 }, 00:13:42.462 "ns_data": { 00:13:42.462 "id": 1, 00:13:42.462 "can_share": true 00:13:42.462 } 00:13:42.462 } 00:13:42.462 ], 00:13:42.462 "mp_policy": "active_passive" 00:13:42.462 } 00:13:42.462 } 00:13:42.462 ] 00:13:42.462 14:55:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1765084 00:13:42.462 14:55:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:42.462 14:55:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:42.721 Running I/O for 10 seconds... 
00:13:43.660 Latency(us) 00:13:43.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:43.660 Nvme0n1 : 1.00 25823.00 100.87 0.00 0.00 0.00 0.00 0.00 00:13:43.660 =================================================================================================================== 00:13:43.660 Total : 25823.00 100.87 0.00 0.00 0.00 0.00 0.00 00:13:43.660 00:13:44.601 14:56:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:44.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.601 Nvme0n1 : 2.00 26019.00 101.64 0.00 0.00 0.00 0.00 0.00 00:13:44.601 =================================================================================================================== 00:13:44.601 Total : 26019.00 101.64 0.00 0.00 0.00 0.00 0.00 00:13:44.601 00:13:44.601 true 00:13:44.861 14:56:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:44.861 14:56:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:44.861 14:56:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:44.861 14:56:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:44.861 14:56:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1765084 00:13:45.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.799 Nvme0n1 : 3.00 26142.00 102.12 0.00 0.00 0.00 0.00 0.00 00:13:45.799 =================================================================================================================== 00:13:45.799 Total : 26142.00 102.12 0.00 0.00 0.00 0.00 0.00 00:13:45.799 00:13:46.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:46.826 Nvme0n1 : 4.00 26225.50 102.44 0.00 0.00 0.00 0.00 0.00 00:13:46.826 =================================================================================================================== 00:13:46.826 Total : 26225.50 102.44 0.00 0.00 0.00 0.00 0.00 00:13:46.826 00:13:47.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.784 Nvme0n1 : 5.00 26283.40 102.67 0.00 0.00 0.00 0.00 0.00 00:13:47.784 =================================================================================================================== 00:13:47.784 Total : 26283.40 102.67 0.00 0.00 0.00 0.00 0.00 00:13:47.784 00:13:48.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.724 Nvme0n1 : 6.00 26324.17 102.83 0.00 0.00 0.00 0.00 0.00 00:13:48.724 =================================================================================================================== 00:13:48.724 Total : 26324.17 102.83 0.00 0.00 0.00 0.00 0.00 00:13:48.724 00:13:49.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.662 Nvme0n1 : 7.00 26355.14 102.95 0.00 0.00 0.00 0.00 0.00 00:13:49.662 =================================================================================================================== 00:13:49.663 Total : 26355.14 102.95 0.00 0.00 
0.00 0.00 0.00 00:13:49.663 00:13:50.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.605 Nvme0n1 : 8.00 26379.12 103.04 0.00 0.00 0.00 0.00 0.00 00:13:50.605 =================================================================================================================== 00:13:50.605 Total : 26379.12 103.04 0.00 0.00 0.00 0.00 0.00 00:13:50.605 00:13:51.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.547 Nvme0n1 : 9.00 26396.89 103.11 0.00 0.00 0.00 0.00 0.00 00:13:51.547 =================================================================================================================== 00:13:51.547 Total : 26396.89 103.11 0.00 0.00 0.00 0.00 0.00 00:13:51.547 00:13:52.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.932 Nvme0n1 : 10.00 26413.10 103.18 0.00 0.00 0.00 0.00 0.00 00:13:52.932 =================================================================================================================== 00:13:52.932 Total : 26413.10 103.18 0.00 0.00 0.00 0.00 0.00 00:13:52.932 00:13:52.932 00:13:52.932 Latency(us) 00:13:52.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.933 Nvme0n1 : 10.00 26414.36 103.18 0.00 0.00 4842.39 3072.00 16165.55 00:13:52.933 =================================================================================================================== 00:13:52.933 Total : 26414.36 103.18 0.00 0.00 4842.39 3072.00 16165.55 00:13:52.933 0 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1764943 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1764943 ']' 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1764943 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1764943 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1764943' 00:13:52.933 killing process with pid 1764943 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1764943 00:13:52.933 Received shutdown signal, test time was about 10.000000 seconds 00:13:52.933 00:13:52.933 Latency(us) 00:13:52.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.933 =================================================================================================================== 00:13:52.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1764943 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:52.933 14:56:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:53.201 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:53.201 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:53.461 [2024-07-15 14:56:09.433773] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:53.461 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:53.722 request: 00:13:53.722 { 00:13:53.722 "uuid": "414adc64-74c0-4667-8cfa-2f26a794fcf9", 00:13:53.722 "method": "bdev_lvol_get_lvstores", 00:13:53.722 "req_id": 1 00:13:53.722 } 00:13:53.722 Got JSON-RPC error response 00:13:53.722 response: 00:13:53.722 { 00:13:53.722 "code": -19, 00:13:53.722 "message": "No such device" 00:13:53.722 } 
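The grow itself and the post-run checks that produced the output above come down to a handful of RPCs, sketched below with the $lvs, $lvol and $AIO_FILE variables captured in the first sketch. The 49/99/61 cluster counts are the values observed in this particular run (a 200M to 400M backing file with 4 MiB clusters and one 150 MiB lvol occupying 38 clusters), not constants of the API.

"$RPC" bdev_lvol_grow_lvstore -u "$lvs"                   # claim the space added by truncate + bdev_aio_rescan
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after
"$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61 here: 99 total minus the lvol's 38

"$RPC" bdev_aio_delete aio_bdev                           # hot-remove of the base bdev closes the lvstore
if "$RPC" bdev_lvol_get_lvstores -u "$lvs"; then          # now expected to fail with -19 "No such device"
    echo "lvstore unexpectedly still present" >&2
fi

"$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096          # re-create the base bdev; the lvstore and lvol reappear
"$RPC" bdev_get_bdevs -b "$lvol" -t 2000                  # wait for the lvol bdev to be examined again
"$RPC" bdev_lvol_delete "$lvol"                           # final teardown, as at the end of the clean test
"$RPC" bdev_lvol_delete_lvstore -u "$lvs"
"$RPC" bdev_aio_delete aio_bdev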
00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:53.722 aio_bdev 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f9af7f19-32c5-4419-b156-3f1dede9000e 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=f9af7f19-32c5-4419-b156-3f1dede9000e 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:53.722 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:53.982 14:56:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f9af7f19-32c5-4419-b156-3f1dede9000e -t 2000 00:13:54.248 [ 00:13:54.249 { 00:13:54.249 "name": "f9af7f19-32c5-4419-b156-3f1dede9000e", 00:13:54.249 "aliases": [ 00:13:54.249 "lvs/lvol" 00:13:54.249 ], 00:13:54.249 "product_name": "Logical Volume", 00:13:54.249 "block_size": 4096, 00:13:54.249 "num_blocks": 38912, 00:13:54.249 "uuid": "f9af7f19-32c5-4419-b156-3f1dede9000e", 00:13:54.249 "assigned_rate_limits": { 00:13:54.249 "rw_ios_per_sec": 0, 00:13:54.249 "rw_mbytes_per_sec": 0, 00:13:54.249 "r_mbytes_per_sec": 0, 00:13:54.249 "w_mbytes_per_sec": 0 00:13:54.249 }, 00:13:54.249 "claimed": false, 00:13:54.249 "zoned": false, 00:13:54.249 "supported_io_types": { 00:13:54.249 "read": true, 00:13:54.249 "write": true, 00:13:54.249 "unmap": true, 00:13:54.249 "flush": false, 00:13:54.249 "reset": true, 00:13:54.249 "nvme_admin": false, 00:13:54.249 "nvme_io": false, 00:13:54.249 "nvme_io_md": false, 00:13:54.249 "write_zeroes": true, 00:13:54.249 "zcopy": false, 00:13:54.249 "get_zone_info": false, 00:13:54.249 "zone_management": false, 00:13:54.249 "zone_append": false, 00:13:54.249 "compare": false, 00:13:54.249 "compare_and_write": false, 00:13:54.249 "abort": false, 00:13:54.249 "seek_hole": true, 00:13:54.249 "seek_data": true, 00:13:54.249 "copy": false, 00:13:54.249 "nvme_iov_md": false 00:13:54.249 }, 00:13:54.249 "driver_specific": { 00:13:54.249 "lvol": { 00:13:54.249 "lvol_store_uuid": "414adc64-74c0-4667-8cfa-2f26a794fcf9", 00:13:54.249 "base_bdev": "aio_bdev", 00:13:54.249 "thin_provision": false, 00:13:54.249 "num_allocated_clusters": 38, 00:13:54.249 "snapshot": false, 00:13:54.249 "clone": false, 00:13:54.249 "esnap_clone": false 00:13:54.249 } 00:13:54.249 } 00:13:54.249 } 
00:13:54.249 ] 00:13:54.249 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:54.249 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:54.249 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:54.249 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:54.249 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:54.249 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:54.509 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:54.509 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9af7f19-32c5-4419-b156-3f1dede9000e 00:13:54.509 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 414adc64-74c0-4667-8cfa-2f26a794fcf9 00:13:54.769 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.030 00:13:55.030 real 0m15.304s 00:13:55.030 user 0m15.282s 00:13:55.030 sys 0m0.992s 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:55.030 ************************************ 00:13:55.030 END TEST lvs_grow_clean 00:13:55.030 ************************************ 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:55.030 ************************************ 00:13:55.030 START TEST lvs_grow_dirty 00:13:55.030 ************************************ 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:55.030 14:56:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:55.030 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:55.030 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:55.030 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:13:55.030 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:55.030 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:55.030 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.030 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.030 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:55.294 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:55.294 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:55.294 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:13:55.294 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:13:55.294 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:55.562 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:55.562 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:55.562 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 lvol 150 00:13:55.823 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=402350f7-4367-46d7-9d9f-94af6b1aaf61 00:13:55.823 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:55.823 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:55.823 [2024-07-15 14:56:11.790791] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:55.823 [2024-07-15 14:56:11.790845] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:55.823 true 00:13:55.823 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:13:55.823 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:56.084 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:13:56.085 14:56:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:56.085 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 402350f7-4367-46d7-9d9f-94af6b1aaf61 00:13:56.346 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:56.346 [2024-07-15 14:56:12.376937] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:56.346 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1768022 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1768022 /var/tmp/bdevperf.sock 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1768022 ']' 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:56.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.607 14:56:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:56.607 [2024-07-15 14:56:12.596724] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:13:56.607 [2024-07-15 14:56:12.596806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768022 ] 00:13:56.607 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.868 [2024-07-15 14:56:12.678314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.868 [2024-07-15 14:56:12.743158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.439 14:56:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.439 14:56:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:57.439 14:56:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:57.698 Nvme0n1 00:13:57.698 14:56:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:57.958 [ 00:13:57.958 { 00:13:57.958 "name": "Nvme0n1", 00:13:57.958 "aliases": [ 00:13:57.958 "402350f7-4367-46d7-9d9f-94af6b1aaf61" 00:13:57.958 ], 00:13:57.958 "product_name": "NVMe disk", 00:13:57.958 "block_size": 4096, 00:13:57.958 "num_blocks": 38912, 00:13:57.958 "uuid": "402350f7-4367-46d7-9d9f-94af6b1aaf61", 00:13:57.958 "assigned_rate_limits": { 00:13:57.958 "rw_ios_per_sec": 0, 00:13:57.958 "rw_mbytes_per_sec": 0, 00:13:57.958 "r_mbytes_per_sec": 0, 00:13:57.958 "w_mbytes_per_sec": 0 00:13:57.958 }, 00:13:57.958 "claimed": false, 00:13:57.958 "zoned": false, 00:13:57.958 "supported_io_types": { 00:13:57.958 "read": true, 00:13:57.958 "write": true, 00:13:57.958 "unmap": true, 00:13:57.958 "flush": true, 00:13:57.958 "reset": true, 00:13:57.958 "nvme_admin": true, 00:13:57.958 "nvme_io": true, 00:13:57.958 "nvme_io_md": false, 00:13:57.958 "write_zeroes": true, 00:13:57.958 "zcopy": false, 00:13:57.958 "get_zone_info": false, 00:13:57.958 "zone_management": false, 00:13:57.958 "zone_append": false, 00:13:57.958 "compare": true, 00:13:57.958 "compare_and_write": true, 00:13:57.958 "abort": true, 00:13:57.958 "seek_hole": false, 00:13:57.958 "seek_data": false, 00:13:57.958 "copy": true, 00:13:57.958 "nvme_iov_md": false 00:13:57.958 }, 00:13:57.958 "memory_domains": [ 00:13:57.958 { 00:13:57.958 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:57.958 "dma_device_type": 0 00:13:57.958 } 00:13:57.958 ], 00:13:57.958 "driver_specific": { 00:13:57.958 "nvme": [ 00:13:57.958 { 00:13:57.958 "trid": { 00:13:57.958 "trtype": "RDMA", 00:13:57.958 "adrfam": "IPv4", 00:13:57.958 "traddr": "192.168.100.8", 00:13:57.958 "trsvcid": "4420", 00:13:57.958 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:57.958 }, 00:13:57.958 "ctrlr_data": { 00:13:57.958 "cntlid": 1, 00:13:57.958 "vendor_id": "0x8086", 00:13:57.958 "model_number": "SPDK bdev Controller", 00:13:57.958 "serial_number": "SPDK0", 00:13:57.958 "firmware_revision": "24.09", 00:13:57.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:57.958 "oacs": { 00:13:57.958 "security": 0, 00:13:57.958 "format": 0, 00:13:57.958 "firmware": 0, 00:13:57.958 "ns_manage": 0 00:13:57.958 }, 00:13:57.958 "multi_ctrlr": true, 00:13:57.958 "ana_reporting": false 
00:13:57.958 }, 00:13:57.958 "vs": { 00:13:57.958 "nvme_version": "1.3" 00:13:57.958 }, 00:13:57.958 "ns_data": { 00:13:57.958 "id": 1, 00:13:57.958 "can_share": true 00:13:57.958 } 00:13:57.958 } 00:13:57.958 ], 00:13:57.958 "mp_policy": "active_passive" 00:13:57.958 } 00:13:57.958 } 00:13:57.958 ] 00:13:57.958 14:56:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1768162 00:13:57.958 14:56:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:57.958 14:56:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:57.958 Running I/O for 10 seconds... 00:13:58.899 Latency(us) 00:13:58.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.899 Nvme0n1 : 1.00 25696.00 100.38 0.00 0.00 0.00 0.00 0.00 00:13:58.899 =================================================================================================================== 00:13:58.899 Total : 25696.00 100.38 0.00 0.00 0.00 0.00 0.00 00:13:58.899 00:13:59.840 14:56:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:13:59.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.840 Nvme0n1 : 2.00 25999.50 101.56 0.00 0.00 0.00 0.00 0.00 00:13:59.840 =================================================================================================================== 00:13:59.840 Total : 25999.50 101.56 0.00 0.00 0.00 0.00 0.00 00:13:59.840 00:14:00.101 true 00:14:00.101 14:56:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:00.101 14:56:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:00.101 14:56:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:00.101 14:56:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:00.101 14:56:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1768162 00:14:01.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.044 Nvme0n1 : 3.00 26122.33 102.04 0.00 0.00 0.00 0.00 0.00 00:14:01.044 =================================================================================================================== 00:14:01.044 Total : 26122.33 102.04 0.00 0.00 0.00 0.00 0.00 00:14:01.044 00:14:01.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.987 Nvme0n1 : 4.00 26200.00 102.34 0.00 0.00 0.00 0.00 0.00 00:14:01.987 =================================================================================================================== 00:14:01.987 Total : 26200.00 102.34 0.00 0.00 0.00 0.00 0.00 00:14:01.987 00:14:02.929 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.929 Nvme0n1 : 5.00 26259.00 102.57 0.00 0.00 0.00 0.00 0.00 00:14:02.929 
=================================================================================================================== 00:14:02.929 Total : 26259.00 102.57 0.00 0.00 0.00 0.00 0.00 00:14:02.929 00:14:03.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.871 Nvme0n1 : 6.00 26293.50 102.71 0.00 0.00 0.00 0.00 0.00 00:14:03.871 =================================================================================================================== 00:14:03.871 Total : 26293.50 102.71 0.00 0.00 0.00 0.00 0.00 00:14:03.871 00:14:05.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.254 Nvme0n1 : 7.00 26303.71 102.75 0.00 0.00 0.00 0.00 0.00 00:14:05.254 =================================================================================================================== 00:14:05.254 Total : 26303.71 102.75 0.00 0.00 0.00 0.00 0.00 00:14:05.254 00:14:06.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.194 Nvme0n1 : 8.00 26328.12 102.84 0.00 0.00 0.00 0.00 0.00 00:14:06.194 =================================================================================================================== 00:14:06.194 Total : 26328.12 102.84 0.00 0.00 0.00 0.00 0.00 00:14:06.194 00:14:07.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.136 Nvme0n1 : 9.00 26350.33 102.93 0.00 0.00 0.00 0.00 0.00 00:14:07.136 =================================================================================================================== 00:14:07.136 Total : 26350.33 102.93 0.00 0.00 0.00 0.00 0.00 00:14:07.136 00:14:08.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.076 Nvme0n1 : 10.00 26370.90 103.01 0.00 0.00 0.00 0.00 0.00 00:14:08.076 =================================================================================================================== 00:14:08.076 Total : 26370.90 103.01 0.00 0.00 0.00 0.00 0.00 00:14:08.076 00:14:08.076 00:14:08.076 Latency(us) 00:14:08.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.076 Nvme0n1 : 10.00 26372.00 103.02 0.00 0.00 4850.14 3372.37 19770.03 00:14:08.076 =================================================================================================================== 00:14:08.076 Total : 26372.00 103.02 0.00 0.00 4850.14 3372.37 19770.03 00:14:08.076 0 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1768022 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1768022 ']' 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1768022 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1768022 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1768022' 00:14:08.076 killing process with pid 1768022 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1768022 00:14:08.076 Received shutdown signal, test time was about 10.000000 seconds 00:14:08.076 00:14:08.076 Latency(us) 00:14:08.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.076 =================================================================================================================== 00:14:08.076 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:08.076 14:56:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1768022 00:14:08.076 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:08.337 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1764352 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1764352 00:14:08.598 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1764352 Killed "${NVMF_APP[@]}" "$@" 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.598 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:08.859 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1770448 00:14:08.859 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1770448 00:14:08.859 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:08.859 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1770448 ']' 00:14:08.859 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.859 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.859 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.859 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.859 14:56:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:08.859 [2024-07-15 14:56:24.709569] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:08.859 [2024-07-15 14:56:24.709627] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.859 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.859 [2024-07-15 14:56:24.776473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.859 [2024-07-15 14:56:24.841065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.859 [2024-07-15 14:56:24.841100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.859 [2024-07-15 14:56:24.841107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.859 [2024-07-15 14:56:24.841114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.859 [2024-07-15 14:56:24.841119] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.859 [2024-07-15 14:56:24.841142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.431 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.431 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:09.431 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.431 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.431 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:09.692 [2024-07-15 14:56:25.653827] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:09.692 [2024-07-15 14:56:25.653965] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:09.692 [2024-07-15 14:56:25.653996] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 402350f7-4367-46d7-9d9f-94af6b1aaf61 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=402350f7-4367-46d7-9d9f-94af6b1aaf61 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:09.692 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:09.952 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 402350f7-4367-46d7-9d9f-94af6b1aaf61 -t 2000 00:14:09.952 [ 00:14:09.952 { 00:14:09.952 "name": "402350f7-4367-46d7-9d9f-94af6b1aaf61", 00:14:09.952 "aliases": [ 00:14:09.952 "lvs/lvol" 00:14:09.952 ], 00:14:09.952 "product_name": "Logical Volume", 00:14:09.953 "block_size": 4096, 00:14:09.953 "num_blocks": 38912, 00:14:09.953 "uuid": "402350f7-4367-46d7-9d9f-94af6b1aaf61", 00:14:09.953 "assigned_rate_limits": { 00:14:09.953 "rw_ios_per_sec": 0, 00:14:09.953 "rw_mbytes_per_sec": 0, 00:14:09.953 "r_mbytes_per_sec": 0, 00:14:09.953 "w_mbytes_per_sec": 0 00:14:09.953 }, 00:14:09.953 "claimed": false, 00:14:09.953 "zoned": false, 00:14:09.953 "supported_io_types": { 00:14:09.953 "read": true, 00:14:09.953 "write": true, 00:14:09.953 "unmap": true, 00:14:09.953 "flush": false, 00:14:09.953 "reset": true, 00:14:09.953 "nvme_admin": false, 00:14:09.953 "nvme_io": false, 00:14:09.953 "nvme_io_md": false, 00:14:09.953 "write_zeroes": true, 00:14:09.953 "zcopy": false, 00:14:09.953 "get_zone_info": false, 00:14:09.953 "zone_management": false, 00:14:09.953 "zone_append": false, 00:14:09.953 "compare": false, 00:14:09.953 "compare_and_write": false, 00:14:09.953 "abort": false, 00:14:09.953 "seek_hole": true, 00:14:09.953 "seek_data": true, 00:14:09.953 "copy": false, 00:14:09.953 "nvme_iov_md": false 00:14:09.953 }, 00:14:09.953 "driver_specific": { 00:14:09.953 "lvol": { 00:14:09.953 "lvol_store_uuid": "3d7130c7-1e58-4805-bdc1-dcf3c51d57b8", 00:14:09.953 "base_bdev": "aio_bdev", 00:14:09.953 "thin_provision": false, 00:14:09.953 "num_allocated_clusters": 38, 00:14:09.953 "snapshot": false, 00:14:09.953 "clone": false, 00:14:09.953 "esnap_clone": false 00:14:09.953 } 00:14:09.953 } 00:14:09.953 } 00:14:09.953 ] 00:14:09.953 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:09.953 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:09.953 14:56:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:10.214 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:10.214 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:10.214 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:10.475 [2024-07-15 14:56:26.417718] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:10.475 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:10.736 request: 00:14:10.736 { 00:14:10.736 "uuid": "3d7130c7-1e58-4805-bdc1-dcf3c51d57b8", 00:14:10.736 "method": "bdev_lvol_get_lvstores", 00:14:10.736 "req_id": 1 00:14:10.736 } 00:14:10.736 Got JSON-RPC error response 00:14:10.736 response: 00:14:10.736 { 00:14:10.736 "code": -19, 00:14:10.736 "message": "No such device" 00:14:10.736 } 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:10.736 aio_bdev 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 402350f7-4367-46d7-9d9f-94af6b1aaf61 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@897 -- # local bdev_name=402350f7-4367-46d7-9d9f-94af6b1aaf61 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:10.736 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:10.997 14:56:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 402350f7-4367-46d7-9d9f-94af6b1aaf61 -t 2000 00:14:11.259 [ 00:14:11.259 { 00:14:11.259 "name": "402350f7-4367-46d7-9d9f-94af6b1aaf61", 00:14:11.259 "aliases": [ 00:14:11.259 "lvs/lvol" 00:14:11.259 ], 00:14:11.259 "product_name": "Logical Volume", 00:14:11.259 "block_size": 4096, 00:14:11.259 "num_blocks": 38912, 00:14:11.259 "uuid": "402350f7-4367-46d7-9d9f-94af6b1aaf61", 00:14:11.259 "assigned_rate_limits": { 00:14:11.259 "rw_ios_per_sec": 0, 00:14:11.259 "rw_mbytes_per_sec": 0, 00:14:11.259 "r_mbytes_per_sec": 0, 00:14:11.259 "w_mbytes_per_sec": 0 00:14:11.259 }, 00:14:11.259 "claimed": false, 00:14:11.259 "zoned": false, 00:14:11.259 "supported_io_types": { 00:14:11.259 "read": true, 00:14:11.259 "write": true, 00:14:11.259 "unmap": true, 00:14:11.259 "flush": false, 00:14:11.259 "reset": true, 00:14:11.259 "nvme_admin": false, 00:14:11.259 "nvme_io": false, 00:14:11.259 "nvme_io_md": false, 00:14:11.259 "write_zeroes": true, 00:14:11.259 "zcopy": false, 00:14:11.259 "get_zone_info": false, 00:14:11.259 "zone_management": false, 00:14:11.259 "zone_append": false, 00:14:11.259 "compare": false, 00:14:11.259 "compare_and_write": false, 00:14:11.259 "abort": false, 00:14:11.259 "seek_hole": true, 00:14:11.259 "seek_data": true, 00:14:11.259 "copy": false, 00:14:11.259 "nvme_iov_md": false 00:14:11.259 }, 00:14:11.259 "driver_specific": { 00:14:11.259 "lvol": { 00:14:11.259 "lvol_store_uuid": "3d7130c7-1e58-4805-bdc1-dcf3c51d57b8", 00:14:11.259 "base_bdev": "aio_bdev", 00:14:11.259 "thin_provision": false, 00:14:11.259 "num_allocated_clusters": 38, 00:14:11.259 "snapshot": false, 00:14:11.259 "clone": false, 00:14:11.259 "esnap_clone": false 00:14:11.259 } 00:14:11.259 } 00:14:11.259 } 00:14:11.259 ] 00:14:11.259 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:11.259 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:11.259 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:11.259 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:11.259 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:11.259 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:14:11.520 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:11.520 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 402350f7-4367-46d7-9d9f-94af6b1aaf61 00:14:11.520 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d7130c7-1e58-4805-bdc1-dcf3c51d57b8 00:14:11.781 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:12.041 00:14:12.041 real 0m16.895s 00:14:12.041 user 0m44.884s 00:14:12.041 sys 0m2.333s 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:12.041 ************************************ 00:14:12.041 END TEST lvs_grow_dirty 00:14:12.041 ************************************ 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:12.041 nvmf_trace.0 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.041 14:56:27 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:12.041 rmmod nvme_rdma 00:14:12.041 rmmod nvme_fabrics 00:14:12.041 14:56:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.041 14:56:28 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@124 -- # set -e 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1770448 ']' 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1770448 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1770448 ']' 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1770448 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1770448 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1770448' 00:14:12.042 killing process with pid 1770448 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1770448 00:14:12.042 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1770448 00:14:12.302 14:56:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.302 14:56:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:12.302 00:14:12.302 real 0m41.774s 00:14:12.302 user 1m6.435s 00:14:12.302 sys 0m9.489s 00:14:12.302 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:12.302 14:56:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:12.302 ************************************ 00:14:12.302 END TEST nvmf_lvs_grow 00:14:12.302 ************************************ 00:14:12.302 14:56:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:12.302 14:56:28 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:12.302 14:56:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:12.302 14:56:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.302 14:56:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:12.302 ************************************ 00:14:12.302 START TEST nvmf_bdev_io_wait 00:14:12.302 ************************************ 00:14:12.302 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:12.563 * Looking for test storage... 
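The run_test call above hands control to the bdev_io_wait suite. For reference, a minimal sketch of invoking the same script standalone, assuming only the SPDK checkout path already used throughout this log ($SPDK_DIR is a stand-in variable, not something the harness exports):

    # Hypothetical standalone run of the suite started above; the script path
    # and --transport flag are copied verbatim from the run_test invocation.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    sudo "$SPDK_DIR/test/nvmf/target/bdev_io_wait.sh" --transport=rdma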
00:14:12.563 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:12.563 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.564 14:56:28 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.564 14:56:28 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.709 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.710 
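The arrays populated above pair each NIC family with its PCI vendor/device IDs (Intel 0x8086 for e810/x722, Mellanox 0x15b3 for mlx). A rough lspci equivalent of that device scan, offered only as a sketch with a subset of the IDs from the trace:

    # Enumerate candidate NVMe-oF NICs by vendor:device ID; the ID list below
    # is a subset copied from the e810/x722/mlx arrays built above.
    for id in 8086:1592 8086:159b 8086:37d2 15b3:1015 15b3:1017 15b3:101d; do
        lspci -Dnn -d "$id"
    done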
14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:20.710 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:20.710 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:20.710 Found net devices under 0000:98:00.0: mlx_0_0 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:20.710 Found net devices under 0000:98:00.1: mlx_0_1 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:20.710 14:56:36 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:20.710 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:20.710 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:14:20.710 altname enp152s0f0np0 00:14:20.710 altname ens817f0np0 00:14:20.710 inet 192.168.100.8/24 scope global mlx_0_0 00:14:20.710 valid_lft forever preferred_lft forever 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:20.710 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:20.710 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:14:20.710 altname enp152s0f1np1 00:14:20.710 altname ens817f1np1 00:14:20.710 inet 192.168.100.9/24 scope global mlx_0_1 00:14:20.710 valid_lft forever preferred_lft forever 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:20.710 14:56:36 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:20.710 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:20.711 192.168.100.9' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:20.711 192.168.100.9' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:20.711 192.168.100.9' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1775330 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1775330 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1775330 ']' 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.711 14:56:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:20.711 [2024-07-15 14:56:36.556552] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:20.711 [2024-07-15 14:56:36.556620] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.711 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.711 [2024-07-15 14:56:36.627934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.711 [2024-07-15 14:56:36.703105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.711 [2024-07-15 14:56:36.703144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
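nvmfappstart launched nvmf_tgt with -m 0xF (four reactor cores) and --wait-for-rpc, so the target pauses until configuration RPCs arrive, and waitforlisten then polls the default RPC socket. A simplified sketch of that handshake, where the polling loop merely stands in for the real waitforlisten helper (rpc_get_methods is a standard SPDK RPC):

    # Sketch: start the target paused at the RPC stage, wait for its socket.
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done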
00:14:20.711 [2024-07-15 14:56:36.703152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.711 [2024-07-15 14:56:36.703159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.711 [2024-07-15 14:56:36.703164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.711 [2024-07-15 14:56:36.703270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.711 [2024-07-15 14:56:36.703493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.711 [2024-07-15 14:56:36.703494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.711 [2024-07-15 14:56:36.703344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.657 [2024-07-15 14:56:37.482681] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21693a0/0x216d890) succeed. 00:14:21.657 [2024-07-15 14:56:37.497203] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x216a9e0/0x21aef20) succeed. 
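The rpc_cmd calls above configure the paused target: bdev_set_options with a deliberately tiny bdev I/O pool (-p 5) and cache (-c 1), presumably so that bdevperf's deep queues exhaust the pool and exercise the queue-io-wait path this test is named for, then framework init and the RDMA transport. As plain rpc.py calls the same sequence would read roughly:

    # Equivalent rpc.py sequence; option values copied from the trace.
    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc bdev_set_options -p 5 -c 1     # tiny I/O pool forces io_wait retries
    $rpc framework_start_init
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192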
00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.657 Malloc0 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:21.657 [2024-07-15 14:56:37.683496] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1775603 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1775605 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:21.657 { 00:14:21.657 "params": { 00:14:21.657 "name": "Nvme$subsystem", 00:14:21.657 "trtype": "$TEST_TRANSPORT", 00:14:21.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:21.657 "adrfam": "ipv4", 00:14:21.657 "trsvcid": "$NVMF_PORT", 00:14:21.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:21.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:21.657 "hdgst": ${hdgst:-false}, 00:14:21.657 "ddgst": ${ddgst:-false} 00:14:21.657 }, 00:14:21.657 "method": "bdev_nvme_attach_controller" 00:14:21.657 } 00:14:21.657 EOF 00:14:21.657 
)") 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1775607 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:21.657 { 00:14:21.657 "params": { 00:14:21.657 "name": "Nvme$subsystem", 00:14:21.657 "trtype": "$TEST_TRANSPORT", 00:14:21.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:21.657 "adrfam": "ipv4", 00:14:21.657 "trsvcid": "$NVMF_PORT", 00:14:21.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:21.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:21.657 "hdgst": ${hdgst:-false}, 00:14:21.657 "ddgst": ${ddgst:-false} 00:14:21.657 }, 00:14:21.657 "method": "bdev_nvme_attach_controller" 00:14:21.657 } 00:14:21.657 EOF 00:14:21.657 )") 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1775610 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:21.657 { 00:14:21.657 "params": { 00:14:21.657 "name": "Nvme$subsystem", 00:14:21.657 "trtype": "$TEST_TRANSPORT", 00:14:21.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:21.657 "adrfam": "ipv4", 00:14:21.657 "trsvcid": "$NVMF_PORT", 00:14:21.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:21.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:21.657 "hdgst": ${hdgst:-false}, 00:14:21.657 "ddgst": ${ddgst:-false} 00:14:21.657 }, 00:14:21.657 "method": "bdev_nvme_attach_controller" 00:14:21.657 } 00:14:21.657 EOF 00:14:21.657 )") 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:21.657 14:56:37 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:21.657 { 00:14:21.657 "params": { 00:14:21.657 "name": "Nvme$subsystem", 00:14:21.657 "trtype": "$TEST_TRANSPORT", 00:14:21.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:21.657 "adrfam": "ipv4", 00:14:21.657 "trsvcid": "$NVMF_PORT", 00:14:21.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:21.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:21.657 "hdgst": ${hdgst:-false}, 00:14:21.657 "ddgst": ${ddgst:-false} 00:14:21.657 }, 00:14:21.657 "method": "bdev_nvme_attach_controller" 00:14:21.657 } 00:14:21.657 EOF 00:14:21.657 )") 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1775603 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:21.657 "params": { 00:14:21.657 "name": "Nvme1", 00:14:21.657 "trtype": "rdma", 00:14:21.657 "traddr": "192.168.100.8", 00:14:21.657 "adrfam": "ipv4", 00:14:21.657 "trsvcid": "4420", 00:14:21.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.657 "hdgst": false, 00:14:21.657 "ddgst": false 00:14:21.657 }, 00:14:21.657 "method": "bdev_nvme_attach_controller" 00:14:21.657 }' 00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
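gen_nvmf_target_json emits one bdevperf config per instance; the printf output that follows shows only the bdev_nvme_attach_controller stanza, which jq then wraps into a full JSON-config document. The file bdevperf receives on --json /dev/fd/63 plausibly has the shape below (the inner object is verbatim from the trace; the outer "subsystems"/"bdev" wrapper is inferred from SPDK's JSON-config format, and /tmp/bdevperf.json is a stand-in path):

    cat <<'EOF' > /tmp/bdevperf.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF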
00:14:21.657 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:21.657 "params": { 00:14:21.657 "name": "Nvme1", 00:14:21.657 "trtype": "rdma", 00:14:21.657 "traddr": "192.168.100.8", 00:14:21.657 "adrfam": "ipv4", 00:14:21.657 "trsvcid": "4420", 00:14:21.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.657 "hdgst": false, 00:14:21.657 "ddgst": false 00:14:21.657 }, 00:14:21.657 "method": "bdev_nvme_attach_controller" 00:14:21.657 }' 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:21.657 "params": { 00:14:21.657 "name": "Nvme1", 00:14:21.657 "trtype": "rdma", 00:14:21.657 "traddr": "192.168.100.8", 00:14:21.657 "adrfam": "ipv4", 00:14:21.657 "trsvcid": "4420", 00:14:21.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.657 "hdgst": false, 00:14:21.657 "ddgst": false 00:14:21.657 }, 00:14:21.657 "method": "bdev_nvme_attach_controller" 00:14:21.657 }' 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 14:56:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:21.657 "params": { 00:14:21.657 "name": "Nvme1", 00:14:21.657 "trtype": "rdma", 00:14:21.657 "traddr": "192.168.100.8", 00:14:21.657 "adrfam": "ipv4", 00:14:21.657 "trsvcid": "4420", 00:14:21.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.657 "hdgst": false, 00:14:21.657 "ddgst": false 00:14:21.657 }, 00:14:21.657 "method": "bdev_nvme_attach_controller" 00:14:21.657 }' 00:14:21.922 [2024-07-15 14:56:37.735520] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... [2024-07-15 14:56:37.735521] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... [2024-07-15 14:56:37.735574] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:21.922 [2024-07-15 14:56:37.735575] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:21.922 [2024-07-15 14:56:37.736651] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... [2024-07-15 14:56:37.736694] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:21.922 [2024-07-15 14:56:37.738699] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
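Four bdevperf instances start here, one per workload (write, read, flush, unmap), each pinned to its own core mask and given a distinct -i/--file-prefix so their DPDK shared-memory files do not collide. A sketch of launching just the write instance by hand against the config shown earlier (all flags copied from the traced invocation; /tmp/bdevperf.json remains a stand-in path):

    # Core mask 0x10, queue depth 128, 4 KiB I/Os, write workload, 1 s run,
    # 256 MB hugepage memory.
    "$SPDK_DIR/build/examples/bdevperf" -m 0x10 -i 1 \
        --json /tmp/bdevperf.json -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    wait "$WRITE_PID"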
00:14:21.922 [2024-07-15 14:56:37.738749] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:21.922 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.922 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.922 [2024-07-15 14:56:37.897272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.922 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.922 [2024-07-15 14:56:37.948054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:21.922 [2024-07-15 14:56:37.952967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.923 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.183 [2024-07-15 14:56:38.002726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:22.183 [2024-07-15 14:56:38.015719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.183 [2024-07-15 14:56:38.064745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.183 [2024-07-15 14:56:38.067766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:22.183 [2024-07-15 14:56:38.114339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:22.183 Running I/O for 1 seconds... 00:14:22.183 Running I/O for 1 seconds... 00:14:22.183 Running I/O for 1 seconds... 00:14:22.443 Running I/O for 1 seconds... 00:14:23.434 00:14:23.434 Latency(us) 00:14:23.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.434 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:23.434 Nvme1n1 : 1.00 21586.25 84.32 0.00 0.00 5913.33 4041.39 15182.51 00:14:23.434 =================================================================================================================== 00:14:23.434 Total : 21586.25 84.32 0.00 0.00 5913.33 4041.39 15182.51 00:14:23.434 00:14:23.434 Latency(us) 00:14:23.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.434 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:23.434 Nvme1n1 : 1.01 16276.54 63.58 0.00 0.00 7838.99 5024.43 20425.39 00:14:23.434 =================================================================================================================== 00:14:23.434 Total : 16276.54 63.58 0.00 0.00 7838.99 5024.43 20425.39 00:14:23.434 00:14:23.434 Latency(us) 00:14:23.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.434 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:23.434 Nvme1n1 : 1.00 25045.65 97.83 0.00 0.00 5097.94 3877.55 16274.77 00:14:23.434 =================================================================================================================== 00:14:23.434 Total : 25045.65 97.83 0.00 0.00 5097.94 3877.55 16274.77 00:14:23.434 00:14:23.434 Latency(us) 00:14:23.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.434 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:23.434 Nvme1n1 : 1.00 189351.81 739.66 0.00 0.00 673.09 266.24 2389.33 00:14:23.434 =================================================================================================================== 00:14:23.434 Total : 189351.81 739.66 0.00 0.00 673.09 266.24 2389.33 00:14:23.434 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1775605 00:14:23.434 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1775607 00:14:23.434 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1775610 00:14:23.434 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.434 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.434 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:23.744 rmmod nvme_rdma 00:14:23.744 rmmod nvme_fabrics 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1775330 ']' 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1775330 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1775330 ']' 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1775330 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1775330 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1775330' 00:14:23.744 killing process with pid 1775330 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1775330 00:14:23.744 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1775330 00:14:24.005 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.005 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:24.005 00:14:24.005 real 0m11.481s 00:14:24.005 user 0m19.984s 00:14:24.005 sys 0m7.151s 00:14:24.005 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.005 14:56:39 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:24.005 ************************************ 00:14:24.005 END TEST nvmf_bdev_io_wait 00:14:24.005 ************************************ 00:14:24.005 14:56:39 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:24.005 14:56:39 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:14:24.005 14:56:39 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:24.005 14:56:39 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.005 14:56:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:24.005 ************************************ 00:14:24.005 START TEST nvmf_queue_depth 00:14:24.005 ************************************ 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:14:24.005 * Looking for test storage... 00:14:24.005 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.005 14:56:39 
nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.005 14:56:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.005 14:56:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.005 14:56:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.005 14:56:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.005 14:56:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:32.142 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:32.142 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.142 14:56:47 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:32.142 Found net devices under 0000:98:00.0: mlx_0_0 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:32.142 Found net devices under 0000:98:00.1: mlx_0_1 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:32.142 14:56:47 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:32.143 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:32.143 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:14:32.143 altname enp152s0f0np0 00:14:32.143 altname ens817f0np0 00:14:32.143 inet 192.168.100.8/24 scope global mlx_0_0 00:14:32.143 valid_lft forever preferred_lft forever 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:32.143 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:32.143 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:14:32.143 altname enp152s0f1np1 00:14:32.143 altname ens817f1np1 00:14:32.143 inet 192.168.100.9/24 scope global mlx_0_1 00:14:32.143 valid_lft forever preferred_lft forever 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:32.143 14:56:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:32.143 14:56:48 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:32.143 192.168.100.9' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:32.143 192.168.100.9' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:32.143 192.168.100.9' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1780305 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1780305 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1780305 ']' 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:32.143 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:32.143 [2024-07-15 14:56:48.145590] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
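For reference, the nvmfappstart step traced above amounts to launching nvmf_tgt on core mask 0x2 and polling until its RPC socket answers. A minimal stand-alone sketch of that bring-up (the ./spdk path and the default /var/tmp/spdk.sock socket are assumptions taken from the trace, not output of the test itself):

  #!/usr/bin/env bash
  # Start the SPDK NVMe-oF target on core 1 (mask 0x2) and wait for its RPC socket,
  # mirroring nvmfappstart/waitforlisten from the run above.
  SPDK_DIR=./spdk                     # assumed path to an SPDK checkout/build
  RPC_SOCK=/var/tmp/spdk.sock         # default app RPC socket, as seen in the trace
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"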
00:14:32.143 [2024-07-15 14:56:48.145649] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.143 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.405 [2024-07-15 14:56:48.230310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.405 [2024-07-15 14:56:48.323043] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.405 [2024-07-15 14:56:48.323104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.405 [2024-07-15 14:56:48.323113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.405 [2024-07-15 14:56:48.323120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.405 [2024-07-15 14:56:48.323126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.405 [2024-07-15 14:56:48.323151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.978 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.978 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:32.978 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.978 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.978 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:32.978 14:56:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.978 14:56:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:32.978 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.978 14:56:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:32.978 [2024-07-15 14:56:49.000812] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19fb360/0x19ff850) succeed. 00:14:32.978 [2024-07-15 14:56:49.014929] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19fc860/0x1a40ee0) succeed. 
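Once the two mlx5 IB devices have been created, the rpc_cmd call above corresponds to a single rpc.py invocation; a sketch of the same transport creation, reusing the socket path assumed above:

  # Create the RDMA transport with 1024 shared buffers and 8192-byte in-capsule data,
  # matching the nvmf_create_transport arguments in the trace.
  ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192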
00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:33.239 Malloc0 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:33.239 [2024-07-15 14:56:49.112527] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1780551 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1780551 /var/tmp/bdevperf.sock 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1780551 ']' 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
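Taken together, the four RPCs above provision the target side of the queue-depth run: a 64 MiB malloc bdev, a subsystem, a namespace, and an RDMA listener on the first target IP. A condensed sketch under the same assumptions as above (rpc.py path and socket):

  # Expose a 64 MiB, 512-byte-block malloc bdev over NVMe-oF/RDMA on 192.168.100.8:4420,
  # reproducing the sequence issued by queue_depth.sh in the trace.
  rpc() { ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The bdevperf process started right after (with -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10) then acts as the initiator-side I/O generator.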
00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.239 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:33.239 [2024-07-15 14:56:49.167754] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:33.239 [2024-07-15 14:56:49.167823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780551 ] 00:14:33.239 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.239 [2024-07-15 14:56:49.238539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.500 [2024-07-15 14:56:49.313267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.071 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.071 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:34.071 14:56:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:34.071 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.071 14:56:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:34.071 NVMe0n1 00:14:34.071 14:56:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.071 14:56:50 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:34.331 Running I/O for 10 seconds... 
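On the initiator side, the two commands traced above are all that is needed once bdevperf is up: attach the exported subsystem as an NVMe bdev through the bdevperf RPC socket, then kick off the preconfigured workload. A sketch, assuming bdevperf was started with -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 as earlier in the trace:

  # Attach the NVMe-oF controller over RDMA via the bdevperf RPC socket, then run the
  # queued verify workload that produces the Latency(us) summary below.
  ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests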
00:14:44.327 00:14:44.327 Latency(us) 00:14:44.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.327 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:44.327 Verification LBA range: start 0x0 length 0x4000 00:14:44.328 NVMe0n1 : 10.04 15603.25 60.95 0.00 0.00 65448.41 21408.43 45656.75 00:14:44.328 =================================================================================================================== 00:14:44.328 Total : 15603.25 60.95 0.00 0.00 65448.41 21408.43 45656.75 00:14:44.328 0 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1780551 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1780551 ']' 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1780551 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1780551 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1780551' 00:14:44.328 killing process with pid 1780551 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1780551 00:14:44.328 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.328 00:14:44.328 Latency(us) 00:14:44.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.328 =================================================================================================================== 00:14:44.328 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.328 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1780551 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:44.589 rmmod nvme_rdma 00:14:44.589 rmmod nvme_fabrics 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1780305 ']' 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1780305 
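As a quick sanity check, the reported average latency is consistent with Little's law for a queue that stays close to full: average latency is roughly queue depth divided by IOPS.

  # Little's law estimate for the run above: 1024 outstanding I/Os at ~15603 IOPS.
  awk 'BEGIN { qd = 1024; iops = 15603.25; printf "%.1f us\n", qd / iops * 1e6 }'
  # prints ~65627.6 us, in the same ballpark as the 65448.41 us average reported by bdevperf.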
00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1780305 ']' 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1780305 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1780305 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1780305' 00:14:44.589 killing process with pid 1780305 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1780305 00:14:44.589 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1780305 00:14:44.850 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:44.850 14:57:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:44.850 00:14:44.850 real 0m20.826s 00:14:44.850 user 0m26.385s 00:14:44.850 sys 0m6.642s 00:14:44.850 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.850 14:57:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:44.850 ************************************ 00:14:44.850 END TEST nvmf_queue_depth 00:14:44.850 ************************************ 00:14:44.850 14:57:00 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:44.850 14:57:00 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:44.850 14:57:00 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:44.850 14:57:00 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.850 14:57:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:44.850 ************************************ 00:14:44.850 START TEST nvmf_target_multipath 00:14:44.850 ************************************ 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:44.850 * Looking for test storage... 
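The run_test wrapper invoked above is what emits the START TEST/END TEST banners and the real/user/sys timing between sub-tests. The actual helper lives in autotest_common.sh; its observable behaviour in this log is roughly the following (an approximation for orientation, not the real implementation):

  # Rough approximation of run_test as observed in the trace: banner, timed sub-script,
  # banner, propagate the sub-script's exit status.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test nvmf_target_multipath ./spdk/test/nvmf/target/multipath.sh --transport=rdma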
00:14:44.850 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:44.850 14:57:00 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:44.851 14:57:00 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:52.993 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:52.993 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:52.993 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:52.993 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:52.993 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:52.993 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:52.993 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:52.993 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:52.993 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.994 14:57:08 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:52.994 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:52.994 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:52.994 Found net devices under 0000:98:00.0: mlx_0_0 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:52.994 Found net devices under 0000:98:00.1: mlx_0_1 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:52.994 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:52.994 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:14:52.994 altname enp152s0f0np0 00:14:52.994 altname ens817f0np0 00:14:52.994 inet 192.168.100.8/24 scope global mlx_0_0 00:14:52.994 valid_lft forever preferred_lft forever 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:52.994 14:57:08 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:52.994 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:52.994 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:14:52.994 altname enp152s0f1np1 00:14:52.994 altname ens817f1np1 00:14:52.994 inet 192.168.100.9/24 scope global mlx_0_1 00:14:52.994 valid_lft forever preferred_lft forever 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:52.994 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:52.995 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:52.995 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:52.995 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:52.995 14:57:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:52.995 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:53.256 192.168.100.9' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:53.256 192.168.100.9' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:53.256 192.168.100.9' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:14:53.256 run this test only with TCP transport for now 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:53.256 rmmod nvme_rdma 00:14:53.256 rmmod nvme_fabrics 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:53.256 
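The trace above turns the two mlx interfaces into NVMe-oF target addresses: each interface's IPv4 address is read with ip/awk/cut, the results are collected into RDMA_IP_LIST, and head/tail split that list into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that derivation, assuming two already-configured interfaces named mlx_0_0 and mlx_0_1 (the names and the 192.168.100.0/24 addresses simply mirror this run):

    #!/usr/bin/env bash
    # Sketch of the get_ip_address / RDMA_IP_LIST pattern seen in the nvmf/common.sh trace.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per address; field 4 is "ADDR/PREFIX", so drop the prefix.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    rdma_ifs=(mlx_0_0 mlx_0_1)   # hypothetical interface list; the real script discovers these

    RDMA_IP_LIST=$(for nic in "${rdma_ifs[@]}"; do get_ip_address "$nic"; done)

    # First address becomes the primary target IP, the next one the secondary.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    echo "first:  $NVMF_FIRST_TARGET_IP"    # 192.168.100.8 in this run
    echo "second: $NVMF_SECOND_TARGET_IP"   # 192.168.100.9 in this run
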
14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:53.256 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.257 14:57:09 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:53.257 00:14:53.257 real 0m8.412s 00:14:53.257 user 0m2.436s 00:14:53.257 sys 0m6.058s 00:14:53.257 14:57:09 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.257 14:57:09 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:53.257 ************************************ 00:14:53.257 END TEST nvmf_target_multipath 00:14:53.257 ************************************ 00:14:53.257 14:57:09 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:53.257 14:57:09 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:53.257 14:57:09 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:53.257 14:57:09 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.257 14:57:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:53.257 ************************************ 00:14:53.257 START TEST nvmf_zcopy 00:14:53.257 ************************************ 00:14:53.257 14:57:09 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:53.518 * Looking for test storage... 
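Just before its test body, multipath.sh checks the transport and exits early on RDMA, which is why this run only prints 'run this test only with TCP transport for now' and tears down; zcopy.sh does the same a little later with 'Unsupported transport: rdma'. A hedged sketch of that guard pattern (the variable name TEST_TRANSPORT is illustrative; the harness passes the value via --transport=rdma):

    #!/usr/bin/env bash
    # Illustrative early-exit guard mirroring the '[ rdma != tcp ]' checks in the traced scripts.
    TEST_TRANSPORT=${1:-rdma}

    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo "run this test only with TCP transport for now"
        # the real script calls nvmftestfini here to unload nvme-rdma/nvme-fabrics first
        exit 0
    fi

    echo "TCP transport selected, running the test body"
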
00:14:53.518 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.518 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:53.519 14:57:09 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:01.660 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:01.660 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:01.660 Found net devices under 0000:98:00.0: mlx_0_0 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:01.660 Found net devices under 0000:98:00.1: mlx_0_1 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.660 14:57:17 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:01.660 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:01.661 14:57:17 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:01.661 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:01.661 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:01.661 altname enp152s0f0np0 00:15:01.661 altname ens817f0np0 00:15:01.661 inet 192.168.100.8/24 scope global mlx_0_0 00:15:01.661 valid_lft forever preferred_lft forever 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:01.661 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:01.661 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:01.661 altname enp152s0f1np1 00:15:01.661 altname ens817f1np1 00:15:01.661 inet 192.168.100.9/24 scope global mlx_0_1 00:15:01.661 valid_lft forever preferred_lft forever 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:01.661 192.168.100.9' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:01.661 192.168.100.9' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:01.661 192.168.100.9' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1791885 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1791885 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1791885 ']' 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.661 14:57:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:01.661 [2024-07-15 14:57:17.536542] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:01.661 [2024-07-15 14:57:17.536600] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.661 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.661 [2024-07-15 14:57:17.625498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.661 [2024-07-15 14:57:17.718684] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.661 [2024-07-15 14:57:17.718740] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.661 [2024-07-15 14:57:17.718749] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.661 [2024-07-15 14:57:17.718756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.661 [2024-07-15 14:57:17.718762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
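The zcopy test starts the target with nvmf_tgt -i 0 -e 0xFFFF -m 0x2 and then blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock answers, which is what the 'Waiting for process to start up...' line reflects. A rough sketch of that startup handshake, with the tree path taken from this job's workspace; the polling loop only approximates waitforlisten, it is not the helper's actual implementation:

    #!/usr/bin/env bash
    # Approximate nvmfappstart/waitforlisten flow from the trace.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # workspace path from the log
    RPC_SOCK=/var/tmp/spdk.sock

    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."
    for _ in {1..100}; do
        # spdk_get_version is a lightweight RPC; success means the app is accepting RPCs.
        if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; then
            echo "nvmf_tgt (pid $nvmfpid) is listening"
            break
        fi
        sleep 0.5
    done
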
00:15:01.661 [2024-07-15 14:57:17.718790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:15:02.602 Unsupported transport: rdma 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # type=--id 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # id=0 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:02.602 nvmf_trace.0 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@821 -- # return 0 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.602 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:02.602 rmmod nvme_rdma 00:15:02.602 rmmod nvme_fabrics 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1791885 ']' 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1791885 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1791885 ']' 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1791885 00:15:02.603 14:57:18 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1791885 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1791885' 00:15:02.603 killing process with pid 1791885 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1791885 00:15:02.603 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1791885 00:15:02.862 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.862 14:57:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:02.862 00:15:02.862 real 0m9.442s 00:15:02.862 user 0m3.627s 00:15:02.862 sys 0m6.452s 00:15:02.863 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.863 14:57:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:02.863 ************************************ 00:15:02.863 END TEST nvmf_zcopy 00:15:02.863 ************************************ 00:15:02.863 14:57:18 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:02.863 14:57:18 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:02.863 14:57:18 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:02.863 14:57:18 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.863 14:57:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:02.863 ************************************ 00:15:02.863 START TEST nvmf_nmic 00:15:02.863 ************************************ 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:02.863 * Looking for test storage... 
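Teardown for the zcopy run goes through killprocess: check that the pid still exists with kill -0, read its command name with ps to make sure it is an SPDK reactor rather than a sudo wrapper, then kill it and wait for it to exit. A condensed sketch of that sequence; the sudo special-casing of the real helper is only stubbed out here:

    #!/usr/bin/env bash
    # Condensed sketch of the killprocess pattern visible in the nvmf_zcopy teardown.
    killprocess() {
        local pid=$1

        kill -0 "$pid" || return 1                  # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")

        if [ "$process_name" = sudo ]; then
            echo "sudo wrapper handling omitted in this sketch" >&2
            return 1
        fi

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # reap it if it is a child of this shell
    }

    # killprocess "$nvmfpid"   # e.g. the pid recorded when nvmf_tgt was started
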
00:15:02.863 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.863 
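The same nvmftestinit sequence now repeats for nvmf_nmic, starting with the sysfs walk that maps each Mellanox PCI function to its kernel net device and produces the recurring 'Found net devices under 0000:98:00.x: mlx_0_y' lines. A small sketch of that walk, assuming the PCI addresses are already known (the real common.sh builds the list from vendor/device IDs such as 0x15b3/0x1015):

    #!/usr/bin/env bash
    # Sketch of the pci_net_devs discovery loop traced as nvmf/common.sh@382-401.
    shopt -s nullglob                        # a function with no net devices yields an empty array
    pci_devs=(0000:98:00.0 0000:98:00.1)     # hypothetical list; this run discovered these two
    net_devs=()

    for pci in "${pci_devs[@]}"; do
        # Each PCI function exposes its net devices as directories under .../net/.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        if (( ${#pci_net_devs[@]} == 0 )); then
            echo "No net devices associated with $pci" >&2
            continue
        fi
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

    echo "discovered interfaces: ${net_devs[*]}"  # mlx_0_0 mlx_0_1 in this run
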
14:57:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.863 14:57:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:10.994 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:10.994 14:57:26 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:10.994 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:10.994 Found net devices under 0000:98:00.0: mlx_0_0 00:15:10.994 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:10.995 Found net devices under 0000:98:00.1: mlx_0_1 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:10.995 14:57:26 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:10.995 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:15:10.995 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:10.995 altname enp152s0f0np0 00:15:10.995 altname ens817f0np0 00:15:10.995 inet 192.168.100.8/24 scope global mlx_0_0 00:15:10.995 valid_lft forever preferred_lft forever 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:10.995 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:10.995 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:10.995 altname enp152s0f1np1 00:15:10.995 altname ens817f1np1 00:15:10.995 inet 192.168.100.9/24 scope global mlx_0_1 00:15:10.995 valid_lft forever preferred_lft forever 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:15:10.995 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:10.996 192.168.100.9' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:10.996 192.168.100.9' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:10.996 192.168.100.9' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1796274 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1796274 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1796274 ']' 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.996 14:57:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:10.996 [2024-07-15 14:57:27.018162] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:10.996 [2024-07-15 14:57:27.018238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.996 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.257 [2024-07-15 14:57:27.096109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.257 [2024-07-15 14:57:27.172154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.257 [2024-07-15 14:57:27.172196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.257 [2024-07-15 14:57:27.172207] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.257 [2024-07-15 14:57:27.172214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.257 [2024-07-15 14:57:27.172220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.257 [2024-07-15 14:57:27.172310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.257 [2024-07-15 14:57:27.172548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.257 [2024-07-15 14:57:27.172567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.257 [2024-07-15 14:57:27.172574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.828 14:57:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.828 14:57:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:11.828 14:57:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.828 14:57:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.828 14:57:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.828 14:57:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.828 14:57:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:11.828 14:57:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.828 14:57:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:11.828 [2024-07-15 14:57:27.886610] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8a6200/0x8aa6f0) succeed. 00:15:12.088 [2024-07-15 14:57:27.900503] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8a7840/0x8ebd80) succeed. 
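For readability, the nmic test steps traced in the next lines reduce to the following rpc_cmd / nvme-cli sequence. This is a condensed summary sketch of the trace below, not a verbatim excerpt: rpc_cmd is the harness wrapper around scripts/rpc.py, and the --hostnqn/--hostid arguments are elided here.

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # test case1: a bdev already claimed by cnode1 cannot be added to a second subsystem
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail with "Invalid parameters"
    # test case2: the host connects to cnode1 over two listeners (ports 4420 and 4421)
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    nvme connect -i 15 --hostnqn=... --hostid=... -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn=... --hostid=... -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421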
00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:12.088 Malloc0 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:12.088 [2024-07-15 14:57:28.075991] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:12.088 test case1: single bdev can't be used in multiple subsystems 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:12.088 [2024-07-15 14:57:28.111731] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:12.088 [2024-07-15 
14:57:28.111750] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:12.088 [2024-07-15 14:57:28.111758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:12.088 request: 00:15:12.088 { 00:15:12.088 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:12.088 "namespace": { 00:15:12.088 "bdev_name": "Malloc0", 00:15:12.088 "no_auto_visible": false 00:15:12.088 }, 00:15:12.088 "method": "nvmf_subsystem_add_ns", 00:15:12.088 "req_id": 1 00:15:12.088 } 00:15:12.088 Got JSON-RPC error response 00:15:12.088 response: 00:15:12.088 { 00:15:12.088 "code": -32602, 00:15:12.088 "message": "Invalid parameters" 00:15:12.088 } 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:12.088 Adding namespace failed - expected result. 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:12.088 test case2: host connect to nvmf target in multiple paths 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:12.088 [2024-07-15 14:57:28.123798] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.088 14:57:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:14.000 14:57:29 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:15:15.382 14:57:31 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.382 14:57:31 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:15.382 14:57:31 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.382 14:57:31 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:15.382 14:57:31 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:17.316 14:57:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:17.316 14:57:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:17.316 14:57:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.316 14:57:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:17.316 14:57:33 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.316 14:57:33 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:17.316 14:57:33 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:17.316 [global] 00:15:17.316 thread=1 00:15:17.316 invalidate=1 00:15:17.316 rw=write 00:15:17.316 time_based=1 00:15:17.316 runtime=1 00:15:17.316 ioengine=libaio 00:15:17.316 direct=1 00:15:17.316 bs=4096 00:15:17.316 iodepth=1 00:15:17.316 norandommap=0 00:15:17.316 numjobs=1 00:15:17.316 00:15:17.316 verify_dump=1 00:15:17.316 verify_backlog=512 00:15:17.316 verify_state_save=0 00:15:17.316 do_verify=1 00:15:17.316 verify=crc32c-intel 00:15:17.316 [job0] 00:15:17.316 filename=/dev/nvme0n1 00:15:17.316 Could not set queue depth (nvme0n1) 00:15:17.577 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:17.577 fio-3.35 00:15:17.577 Starting 1 thread 00:15:18.520 00:15:18.520 job0: (groupid=0, jobs=1): err= 0: pid=1797792: Mon Jul 15 14:57:34 2024 00:15:18.520 read: IOPS=8038, BW=31.4MiB/s (32.9MB/s)(31.4MiB/1001msec) 00:15:18.520 slat (nsec): min=5622, max=27718, avg=6030.30, stdev=692.68 00:15:18.520 clat (nsec): min=33023, max=82031, avg=52666.61, stdev=3404.72 00:15:18.520 lat (nsec): min=51140, max=88162, avg=58696.91, stdev=3420.50 00:15:18.520 clat percentiles (nsec): 00:15:18.520 | 1.00th=[46848], 5.00th=[47872], 10.00th=[48384], 20.00th=[49920], 00:15:18.520 | 30.00th=[50432], 40.00th=[51456], 50.00th=[52480], 60.00th=[52992], 00:15:18.520 | 70.00th=[54016], 80.00th=[55552], 90.00th=[57600], 95.00th=[58624], 00:15:18.520 | 99.00th=[61184], 99.50th=[62208], 99.90th=[65280], 99.95th=[67072], 00:15:18.520 | 99.99th=[82432] 00:15:18.520 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:15:18.520 slat (nsec): min=7773, max=47030, avg=8480.18, stdev=937.05 00:15:18.520 clat (usec): min=34, max=221, avg=51.72, stdev= 7.15 00:15:18.520 lat (usec): min=51, max=230, avg=60.20, stdev= 7.31 00:15:18.520 clat percentiles (usec): 00:15:18.520 | 1.00th=[ 45], 5.00th=[ 47], 10.00th=[ 47], 20.00th=[ 48], 00:15:18.520 | 30.00th=[ 49], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:15:18.520 | 70.00th=[ 53], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 59], 00:15:18.520 | 99.00th=[ 71], 99.50th=[ 72], 99.90th=[ 186], 99.95th=[ 190], 00:15:18.520 | 99.99th=[ 223] 00:15:18.520 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1 00:15:18.520 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:15:18.520 lat (usec) : 50=32.61%, 100=67.29%, 250=0.10% 00:15:18.520 cpu : usr=9.10%, sys=16.60%, ctx=16239, majf=0, minf=1 00:15:18.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.520 issued rwts: total=8047,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.520 00:15:18.520 Run status group 0 (all jobs): 00:15:18.520 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=31.4MiB (33.0MB), run=1001-1001msec 00:15:18.520 WRITE: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:15:18.520 00:15:18.520 Disk stats (read/write): 00:15:18.520 nvme0n1: ios=7218/7423, merge=0/0, ticks=336/325, 
in_queue=661, util=90.68% 00:15:18.781 14:57:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:21.327 rmmod nvme_rdma 00:15:21.327 rmmod nvme_fabrics 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1796274 ']' 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1796274 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1796274 ']' 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1796274 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1796274 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1796274' 00:15:21.327 killing process with pid 1796274 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1796274 00:15:21.327 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1796274 00:15:21.588 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.588 14:57:37 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:21.588 00:15:21.588 real 0m18.625s 00:15:21.588 user 0m57.812s 00:15:21.588 sys 0m6.751s 00:15:21.588 14:57:37 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.588 14:57:37 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:21.588 ************************************ 00:15:21.588 END TEST nvmf_nmic 00:15:21.588 ************************************ 00:15:21.588 14:57:37 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:21.588 14:57:37 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:15:21.588 14:57:37 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.588 14:57:37 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.588 14:57:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:21.588 ************************************ 00:15:21.588 START TEST nvmf_fio_target 00:15:21.588 ************************************ 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:15:21.588 * Looking for test storage... 00:15:21.588 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.588 
14:57:37 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.588 
14:57:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:21.588 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.589 14:57:37 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:29.818 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:29.818 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:29.818 Found net devices under 0000:98:00.0: mlx_0_0 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:29.818 Found net devices under 0000:98:00.1: mlx_0_1 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.818 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:29.819 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.819 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:29.819 altname enp152s0f0np0 00:15:29.819 altname ens817f0np0 00:15:29.819 inet 192.168.100.8/24 scope global mlx_0_0 00:15:29.819 valid_lft forever preferred_lft forever 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:29.819 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.819 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:29.819 altname enp152s0f1np1 
00:15:29.819 altname ens817f1np1 00:15:29.819 inet 192.168.100.9/24 scope global mlx_0_1 00:15:29.819 valid_lft forever preferred_lft forever 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:29.819 192.168.100.9' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:29.819 192.168.100.9' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:29.819 192.168.100.9' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1802808 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1802808 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1802808 ']' 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.819 14:57:45 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.819 [2024-07-15 14:57:45.685211] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:29.819 [2024-07-15 14:57:45.685269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.819 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.819 [2024-07-15 14:57:45.752460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.819 [2024-07-15 14:57:45.819089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.819 [2024-07-15 14:57:45.819128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.819 [2024-07-15 14:57:45.819135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.819 [2024-07-15 14:57:45.819142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.819 [2024-07-15 14:57:45.819147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.819 [2024-07-15 14:57:45.819294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.819 [2024-07-15 14:57:45.819450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.819 [2024-07-15 14:57:45.819604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.819 [2024-07-15 14:57:45.819604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.762 14:57:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.762 14:57:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:30.762 14:57:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.762 14:57:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.762 14:57:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.762 14:57:46 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.762 14:57:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:30.762 [2024-07-15 14:57:46.676293] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2400200/0x24046f0) succeed. 00:15:30.762 [2024-07-15 14:57:46.689479] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2401840/0x2445d80) succeed. 
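The fio_target setup traced in the following lines builds four namespaces (two plain malloc bdevs, one raid0 bdev, one concat bdev) before the host connects and runs the four-job fio write/verify pass. In condensed form it amounts to the sequence below; this is a summary sketch of the rpc.py calls shown in the trace, with the full script path shortened to rpc.py and host NQN/ID arguments elided.

    rpc.py bdev_malloc_create 64 512        # Malloc0
    rpc.py bdev_malloc_create 64 512        # Malloc1
    rpc.py bdev_malloc_create 64 512        # Malloc2
    rpc.py bdev_malloc_create 64 512        # Malloc3
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_malloc_create 64 512        # Malloc4
    rpc.py bdev_malloc_create 64 512        # Malloc5
    rpc.py bdev_malloc_create 64 512        # Malloc6
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -i 15 --hostnqn=... --hostid=... -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    # the host then waits until 4 namespaces (nvme0n1..nvme0n4) appear and starts the fio-wrapper run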
00:15:31.022 14:57:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:31.022 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:31.022 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:31.282 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:31.282 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:31.542 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:31.542 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:31.542 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:31.542 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:31.802 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:32.062 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:32.062 14:57:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:32.062 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:32.062 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:32.323 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:32.323 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:32.585 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:32.585 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:32.585 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.845 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:32.845 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.106 14:57:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:33.106 [2024-07-15 14:57:49.082339] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:33.106 14:57:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:15:33.366 14:57:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:33.626 14:57:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:35.011 14:57:50 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:35.011 14:57:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:35.011 14:57:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.011 14:57:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:35.011 14:57:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:35.011 14:57:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:36.929 14:57:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:36.929 14:57:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:36.929 14:57:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:36.929 14:57:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:36.929 14:57:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:36.929 14:57:52 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:36.929 14:57:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:36.929 [global] 00:15:36.929 thread=1 00:15:36.929 invalidate=1 00:15:36.929 rw=write 00:15:36.929 time_based=1 00:15:36.929 runtime=1 00:15:36.929 ioengine=libaio 00:15:36.929 direct=1 00:15:36.929 bs=4096 00:15:36.929 iodepth=1 00:15:36.929 norandommap=0 00:15:36.929 numjobs=1 00:15:36.929 00:15:36.929 verify_dump=1 00:15:36.929 verify_backlog=512 00:15:36.929 verify_state_save=0 00:15:36.929 do_verify=1 00:15:36.929 verify=crc32c-intel 00:15:36.929 [job0] 00:15:36.929 filename=/dev/nvme0n1 00:15:36.929 [job1] 00:15:36.929 filename=/dev/nvme0n2 00:15:36.929 [job2] 00:15:36.929 filename=/dev/nvme0n3 00:15:36.929 [job3] 00:15:36.929 filename=/dev/nvme0n4 00:15:36.929 Could not set queue depth (nvme0n1) 00:15:36.929 Could not set queue depth (nvme0n2) 00:15:36.929 Could not set queue depth (nvme0n3) 00:15:36.929 Could not set queue depth (nvme0n4) 00:15:37.524 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:37.524 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:37.524 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:37.524 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:37.524 fio-3.35 00:15:37.524 Starting 4 threads 00:15:38.907 00:15:38.907 job0: (groupid=0, jobs=1): err= 0: pid=1804475: Mon Jul 15 14:57:54 2024 00:15:38.907 read: IOPS=1716, BW=6865KiB/s (7030kB/s)(6872KiB/1001msec) 00:15:38.907 slat 
(nsec): min=5715, max=47087, avg=20756.20, stdev=11113.10 00:15:38.907 clat (usec): min=49, max=491, avg=237.35, stdev=78.68 00:15:38.907 lat (usec): min=65, max=498, avg=258.10, stdev=79.94 00:15:38.907 clat percentiles (usec): 00:15:38.907 | 1.00th=[ 68], 5.00th=[ 74], 10.00th=[ 108], 20.00th=[ 196], 00:15:38.907 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 260], 00:15:38.907 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 322], 95.00th=[ 363], 00:15:38.907 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 445], 99.95th=[ 494], 00:15:38.907 | 99.99th=[ 494] 00:15:38.907 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:38.907 slat (nsec): min=7894, max=69464, avg=23657.76, stdev=12183.09 00:15:38.907 clat (usec): min=58, max=496, avg=237.12, stdev=84.15 00:15:38.907 lat (usec): min=67, max=504, avg=260.78, stdev=87.85 00:15:38.907 clat percentiles (usec): 00:15:38.907 | 1.00th=[ 64], 5.00th=[ 70], 10.00th=[ 76], 20.00th=[ 176], 00:15:38.907 | 30.00th=[ 227], 40.00th=[ 245], 50.00th=[ 258], 60.00th=[ 269], 00:15:38.907 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 355], 00:15:38.907 | 99.00th=[ 396], 99.50th=[ 420], 99.90th=[ 465], 99.95th=[ 478], 00:15:38.907 | 99.99th=[ 498] 00:15:38.907 bw ( KiB/s): min= 8192, max= 8192, per=16.66%, avg=8192.00, stdev= 0.00, samples=1 00:15:38.907 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:38.907 lat (usec) : 50=0.03%, 100=11.71%, 250=34.92%, 500=53.35% 00:15:38.907 cpu : usr=6.40%, sys=10.90%, ctx=3766, majf=0, minf=1 00:15:38.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.907 issued rwts: total=1718,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.907 job1: (groupid=0, jobs=1): err= 0: pid=1804494: Mon Jul 15 14:57:54 2024 00:15:38.907 read: IOPS=1957, BW=7828KiB/s (8016kB/s)(7836KiB/1001msec) 00:15:38.907 slat (nsec): min=5743, max=47166, avg=20039.43, stdev=10898.53 00:15:38.907 clat (usec): min=40, max=470, avg=217.90, stdev=89.81 00:15:38.907 lat (usec): min=54, max=485, avg=237.94, stdev=93.28 00:15:38.907 clat percentiles (usec): 00:15:38.907 | 1.00th=[ 55], 5.00th=[ 69], 10.00th=[ 75], 20.00th=[ 95], 00:15:38.907 | 30.00th=[ 200], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 253], 00:15:38.907 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 314], 95.00th=[ 355], 00:15:38.907 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 461], 99.95th=[ 469], 00:15:38.907 | 99.99th=[ 469] 00:15:38.907 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:38.907 slat (nsec): min=8058, max=53901, avg=24750.12, stdev=11930.14 00:15:38.907 clat (usec): min=49, max=454, avg=223.95, stdev=91.02 00:15:38.907 lat (usec): min=58, max=463, avg=248.70, stdev=95.54 00:15:38.907 clat percentiles (usec): 00:15:38.907 | 1.00th=[ 62], 5.00th=[ 68], 10.00th=[ 71], 20.00th=[ 87], 00:15:38.907 | 30.00th=[ 210], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 262], 00:15:38.907 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 347], 00:15:38.907 | 99.00th=[ 392], 99.50th=[ 412], 99.90th=[ 437], 99.95th=[ 441], 00:15:38.907 | 99.99th=[ 453] 00:15:38.907 bw ( KiB/s): min= 9312, max= 9312, per=18.93%, avg=9312.00, stdev= 0.00, samples=1 00:15:38.907 iops : min= 2328, max= 2328, avg=2328.00, stdev= 0.00, 
samples=1 00:15:38.907 lat (usec) : 50=0.12%, 100=20.36%, 250=33.04%, 500=46.47% 00:15:38.907 cpu : usr=7.40%, sys=11.20%, ctx=4007, majf=0, minf=1 00:15:38.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.907 issued rwts: total=1959,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.907 job2: (groupid=0, jobs=1): err= 0: pid=1804521: Mon Jul 15 14:57:54 2024 00:15:38.907 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:38.907 slat (nsec): min=5969, max=64273, avg=21006.83, stdev=11717.39 00:15:38.907 clat (usec): min=52, max=445, avg=222.56, stdev=92.76 00:15:38.907 lat (usec): min=58, max=468, avg=243.56, stdev=97.21 00:15:38.907 clat percentiles (usec): 00:15:38.907 | 1.00th=[ 58], 5.00th=[ 70], 10.00th=[ 75], 20.00th=[ 95], 00:15:38.907 | 30.00th=[ 200], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 258], 00:15:38.907 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 330], 95.00th=[ 363], 00:15:38.907 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 441], 99.95th=[ 441], 00:15:38.907 | 99.99th=[ 445] 00:15:38.907 write: IOPS=2065, BW=8264KiB/s (8462kB/s)(8272KiB/1001msec); 0 zone resets 00:15:38.907 slat (nsec): min=8113, max=68310, avg=22957.38, stdev=12975.87 00:15:38.907 clat (usec): min=50, max=481, avg=207.58, stdev=101.60 00:15:38.907 lat (usec): min=59, max=489, avg=230.54, stdev=108.33 00:15:38.907 clat percentiles (usec): 00:15:38.907 | 1.00th=[ 53], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 74], 00:15:38.907 | 30.00th=[ 118], 40.00th=[ 217], 50.00th=[ 243], 60.00th=[ 255], 00:15:38.907 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 318], 95.00th=[ 355], 00:15:38.907 | 99.00th=[ 404], 99.50th=[ 429], 99.90th=[ 453], 99.95th=[ 478], 00:15:38.907 | 99.99th=[ 482] 00:15:38.907 bw ( KiB/s): min=10800, max=10800, per=21.96%, avg=10800.00, stdev= 0.00, samples=1 00:15:38.907 iops : min= 2700, max= 2700, avg=2700.00, stdev= 0.00, samples=1 00:15:38.907 lat (usec) : 100=24.90%, 250=29.01%, 500=46.09% 00:15:38.907 cpu : usr=6.70%, sys=11.90%, ctx=4116, majf=0, minf=1 00:15:38.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.907 issued rwts: total=2048,2068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.907 job3: (groupid=0, jobs=1): err= 0: pid=1804533: Mon Jul 15 14:57:54 2024 00:15:38.907 read: IOPS=6038, BW=23.6MiB/s (24.7MB/s)(23.6MiB/1001msec) 00:15:38.907 slat (nsec): min=5716, max=48241, avg=7385.93, stdev=5065.67 00:15:38.907 clat (usec): min=39, max=460, avg=72.71, stdev=49.82 00:15:38.907 lat (usec): min=55, max=466, avg=80.10, stdev=53.69 00:15:38.907 clat percentiles (usec): 00:15:38.907 | 1.00th=[ 53], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57], 00:15:38.907 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 62], 00:15:38.907 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 72], 95.00th=[ 219], 00:15:38.907 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 396], 99.95th=[ 412], 00:15:38.907 | 99.99th=[ 461] 00:15:38.907 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:15:38.907 slat (nsec): min=7988, max=52815, 
avg=9475.70, stdev=4728.56 00:15:38.907 clat (usec): min=41, max=457, avg=69.75, stdev=49.58 00:15:38.907 lat (usec): min=56, max=468, avg=79.23, stdev=53.38 00:15:38.907 clat percentiles (usec): 00:15:38.907 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:15:38.907 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:15:38.907 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 69], 95.00th=[ 110], 00:15:38.907 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 404], 99.95th=[ 412], 00:15:38.907 | 99.99th=[ 457] 00:15:38.907 bw ( KiB/s): min=20480, max=20480, per=41.64%, avg=20480.00, stdev= 0.00, samples=1 00:15:38.907 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:38.907 lat (usec) : 50=0.17%, 100=94.03%, 250=2.03%, 500=3.77% 00:15:38.907 cpu : usr=7.80%, sys=14.30%, ctx=12189, majf=0, minf=1 00:15:38.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:38.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.907 issued rwts: total=6045,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.907 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:38.907 00:15:38.907 Run status group 0 (all jobs): 00:15:38.907 READ: bw=45.9MiB/s (48.2MB/s), 6865KiB/s-23.6MiB/s (7030kB/s-24.7MB/s), io=46.0MiB (48.2MB), run=1001-1001msec 00:15:38.907 WRITE: bw=48.0MiB/s (50.4MB/s), 8184KiB/s-24.0MiB/s (8380kB/s-25.1MB/s), io=48.1MiB (50.4MB), run=1001-1001msec 00:15:38.907 00:15:38.907 Disk stats (read/write): 00:15:38.907 nvme0n1: ios=1586/1554, merge=0/0, ticks=243/219, in_queue=462, util=81.56% 00:15:38.907 nvme0n2: ios=1536/1765, merge=0/0, ticks=184/227, in_queue=411, util=83.13% 00:15:38.907 nvme0n3: ios=1536/1903, merge=0/0, ticks=213/246, in_queue=459, util=87.54% 00:15:38.907 nvme0n4: ios=4608/4780, merge=0/0, ticks=291/302, in_queue=593, util=89.18% 00:15:38.907 14:57:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:38.907 [global] 00:15:38.907 thread=1 00:15:38.907 invalidate=1 00:15:38.907 rw=randwrite 00:15:38.907 time_based=1 00:15:38.907 runtime=1 00:15:38.907 ioengine=libaio 00:15:38.907 direct=1 00:15:38.907 bs=4096 00:15:38.907 iodepth=1 00:15:38.907 norandommap=0 00:15:38.907 numjobs=1 00:15:38.907 00:15:38.907 verify_dump=1 00:15:38.907 verify_backlog=512 00:15:38.907 verify_state_save=0 00:15:38.907 do_verify=1 00:15:38.907 verify=crc32c-intel 00:15:38.907 [job0] 00:15:38.908 filename=/dev/nvme0n1 00:15:38.908 [job1] 00:15:38.908 filename=/dev/nvme0n2 00:15:38.908 [job2] 00:15:38.908 filename=/dev/nvme0n3 00:15:38.908 [job3] 00:15:38.908 filename=/dev/nvme0n4 00:15:38.908 Could not set queue depth (nvme0n1) 00:15:38.908 Could not set queue depth (nvme0n2) 00:15:38.908 Could not set queue depth (nvme0n3) 00:15:38.908 Could not set queue depth (nvme0n4) 00:15:39.168 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:39.168 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:39.168 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:39.168 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:39.168 fio-3.35 00:15:39.168 Starting 4 threads 
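The job file printed above is what SPDK's fio-wrapper hands to fio. A rough standalone equivalent, with the parameters copied from the trace (the job-file name is made up here, and fio-wrapper's extra plumbing — output capture and verify-state files — is omitted):

cat > nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-randwrite.fio

In the per-job output that follows, slat is submission latency, clat is completion latency, and lat is their end-to-end sum; with iodepth=1 and direct=1 the clat percentiles roughly track the per-4KiB fabric round trip to the RDMA target.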
00:15:40.553 00:15:40.553 job0: (groupid=0, jobs=1): err= 0: pid=1804953: Mon Jul 15 14:57:56 2024 00:15:40.553 read: IOPS=4623, BW=18.1MiB/s (18.9MB/s)(18.1MiB/1001msec) 00:15:40.553 slat (nsec): min=5483, max=54159, avg=9276.50, stdev=7516.26 00:15:40.553 clat (usec): min=35, max=413, avg=90.17, stdev=61.27 00:15:40.553 lat (usec): min=51, max=443, avg=99.45, stdev=66.36 00:15:40.553 clat percentiles (usec): 00:15:40.553 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:15:40.553 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 70], 00:15:40.553 | 70.00th=[ 95], 80.00th=[ 115], 90.00th=[ 200], 95.00th=[ 239], 00:15:40.553 | 99.00th=[ 310], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 396], 00:15:40.553 | 99.99th=[ 416] 00:15:40.553 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:15:40.553 slat (nsec): min=7683, max=58792, avg=11206.62, stdev=7518.82 00:15:40.553 clat (usec): min=43, max=461, avg=88.82, stdev=62.84 00:15:40.553 lat (usec): min=51, max=470, avg=100.03, stdev=68.09 00:15:40.553 clat percentiles (usec): 00:15:40.553 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 52], 00:15:40.553 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 67], 00:15:40.553 | 70.00th=[ 92], 80.00th=[ 115], 90.00th=[ 196], 95.00th=[ 243], 00:15:40.553 | 99.00th=[ 318], 99.50th=[ 355], 99.90th=[ 408], 99.95th=[ 445], 00:15:40.553 | 99.99th=[ 461] 00:15:40.553 bw ( KiB/s): min=27656, max=27656, per=48.11%, avg=27656.00, stdev= 0.00, samples=1 00:15:40.553 iops : min= 6914, max= 6914, avg=6914.00, stdev= 0.00, samples=1 00:15:40.553 lat (usec) : 50=9.68%, 100=62.81%, 250=23.68%, 500=3.83% 00:15:40.553 cpu : usr=7.90%, sys=13.50%, ctx=9748, majf=0, minf=1 00:15:40.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.553 issued rwts: total=4628,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:40.553 job1: (groupid=0, jobs=1): err= 0: pid=1804958: Mon Jul 15 14:57:56 2024 00:15:40.553 read: IOPS=2907, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:15:40.553 slat (nsec): min=5276, max=55094, avg=16041.10, stdev=11511.89 00:15:40.553 clat (usec): min=50, max=513, avg=165.31, stdev=86.79 00:15:40.553 lat (usec): min=56, max=524, avg=181.35, stdev=93.57 00:15:40.553 clat percentiles (usec): 00:15:40.553 | 1.00th=[ 65], 5.00th=[ 73], 10.00th=[ 79], 20.00th=[ 91], 00:15:40.553 | 30.00th=[ 102], 40.00th=[ 113], 50.00th=[ 120], 60.00th=[ 159], 00:15:40.553 | 70.00th=[ 229], 80.00th=[ 253], 90.00th=[ 289], 95.00th=[ 326], 00:15:40.553 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 469], 99.95th=[ 494], 00:15:40.553 | 99.99th=[ 515] 00:15:40.553 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:40.553 slat (nsec): min=7663, max=60076, avg=14395.53, stdev=9828.49 00:15:40.553 clat (usec): min=48, max=475, avg=131.04, stdev=73.61 00:15:40.553 lat (usec): min=56, max=508, avg=145.43, stdev=80.01 00:15:40.553 clat percentiles (usec): 00:15:40.553 | 1.00th=[ 61], 5.00th=[ 67], 10.00th=[ 70], 20.00th=[ 77], 00:15:40.553 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 106], 60.00th=[ 114], 00:15:40.553 | 70.00th=[ 121], 80.00th=[ 206], 90.00th=[ 253], 95.00th=[ 285], 00:15:40.553 | 99.00th=[ 363], 99.50th=[ 392], 99.90th=[ 420], 99.95th=[ 465], 00:15:40.553 | 99.99th=[ 478] 
00:15:40.553 bw ( KiB/s): min=12288, max=12288, per=21.37%, avg=12288.00, stdev= 0.00, samples=1 00:15:40.553 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:40.553 lat (usec) : 50=0.08%, 100=36.14%, 250=48.18%, 500=15.58%, 750=0.02% 00:15:40.553 cpu : usr=6.00%, sys=13.10%, ctx=5982, majf=0, minf=1 00:15:40.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.553 issued rwts: total=2910,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:40.553 job2: (groupid=0, jobs=1): err= 0: pid=1804963: Mon Jul 15 14:57:56 2024 00:15:40.553 read: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:15:40.553 slat (nsec): min=5541, max=66545, avg=14062.54, stdev=11233.80 00:15:40.553 clat (usec): min=51, max=479, avg=145.90, stdev=80.52 00:15:40.553 lat (usec): min=57, max=509, avg=159.96, stdev=87.69 00:15:40.553 clat percentiles (usec): 00:15:40.553 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 79], 00:15:40.553 | 30.00th=[ 88], 40.00th=[ 101], 50.00th=[ 115], 60.00th=[ 123], 00:15:40.553 | 70.00th=[ 200], 80.00th=[ 233], 90.00th=[ 262], 95.00th=[ 285], 00:15:40.553 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[ 461], 99.95th=[ 478], 00:15:40.553 | 99.99th=[ 482] 00:15:40.553 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:40.553 slat (nsec): min=7748, max=83231, avg=16780.56, stdev=11712.24 00:15:40.553 clat (usec): min=49, max=462, avg=147.81, stdev=81.62 00:15:40.553 lat (usec): min=58, max=483, avg=164.59, stdev=89.20 00:15:40.553 clat percentiles (usec): 00:15:40.553 | 1.00th=[ 56], 5.00th=[ 60], 10.00th=[ 65], 20.00th=[ 78], 00:15:40.553 | 30.00th=[ 87], 40.00th=[ 102], 50.00th=[ 114], 60.00th=[ 124], 00:15:40.553 | 70.00th=[ 206], 80.00th=[ 233], 90.00th=[ 265], 95.00th=[ 293], 00:15:40.553 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 437], 99.95th=[ 449], 00:15:40.553 | 99.99th=[ 461] 00:15:40.553 bw ( KiB/s): min=12288, max=12288, per=21.37%, avg=12288.00, stdev= 0.00, samples=1 00:15:40.553 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:40.553 lat (usec) : 50=0.02%, 100=39.20%, 250=47.17%, 500=13.62% 00:15:40.553 cpu : usr=5.90%, sys=13.30%, ctx=6009, majf=0, minf=1 00:15:40.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.553 issued rwts: total=2936,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:40.553 job3: (groupid=0, jobs=1): err= 0: pid=1804969: Mon Jul 15 14:57:56 2024 00:15:40.553 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:15:40.553 slat (nsec): min=5543, max=54250, avg=13017.89, stdev=10167.67 00:15:40.553 clat (usec): min=53, max=774, avg=137.74, stdev=75.04 00:15:40.553 lat (usec): min=59, max=781, avg=150.75, stdev=81.32 00:15:40.553 clat percentiles (usec): 00:15:40.553 | 1.00th=[ 58], 5.00th=[ 65], 10.00th=[ 74], 20.00th=[ 84], 00:15:40.553 | 30.00th=[ 92], 40.00th=[ 100], 50.00th=[ 111], 60.00th=[ 117], 00:15:40.553 | 70.00th=[ 128], 80.00th=[ 217], 90.00th=[ 255], 95.00th=[ 285], 00:15:40.553 | 99.00th=[ 359], 99.50th=[ 392], 
99.90th=[ 474], 99.95th=[ 494], 00:15:40.554 | 99.99th=[ 775] 00:15:40.554 write: IOPS=3119, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:15:40.554 slat (nsec): min=7904, max=54723, avg=15794.23, stdev=10405.40 00:15:40.554 clat (usec): min=50, max=462, avg=148.25, stdev=82.85 00:15:40.554 lat (usec): min=58, max=475, avg=164.04, stdev=89.12 00:15:40.554 clat percentiles (usec): 00:15:40.554 | 1.00th=[ 56], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 82], 00:15:40.554 | 30.00th=[ 90], 40.00th=[ 101], 50.00th=[ 112], 60.00th=[ 120], 00:15:40.554 | 70.00th=[ 202], 80.00th=[ 239], 90.00th=[ 273], 95.00th=[ 306], 00:15:40.554 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 433], 99.95th=[ 453], 00:15:40.554 | 99.99th=[ 461] 00:15:40.554 bw ( KiB/s): min=12288, max=12288, per=21.37%, avg=12288.00, stdev= 0.00, samples=1 00:15:40.554 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:40.554 lat (usec) : 100=39.64%, 250=46.55%, 500=13.79%, 1000=0.02% 00:15:40.554 cpu : usr=6.20%, sys=12.50%, ctx=6195, majf=0, minf=1 00:15:40.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.554 issued rwts: total=3072,3123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:40.554 00:15:40.554 Run status group 0 (all jobs): 00:15:40.554 READ: bw=52.9MiB/s (55.4MB/s), 11.4MiB/s-18.1MiB/s (11.9MB/s-18.9MB/s), io=52.9MiB (55.5MB), run=1001-1001msec 00:15:40.554 WRITE: bw=56.1MiB/s (58.9MB/s), 12.0MiB/s-20.0MiB/s (12.6MB/s-20.9MB/s), io=56.2MiB (58.9MB), run=1001-1001msec 00:15:40.554 00:15:40.554 Disk stats (read/write): 00:15:40.554 nvme0n1: ios=4152/4608, merge=0/0, ticks=321/319, in_queue=640, util=85.77% 00:15:40.554 nvme0n2: ios=2383/2560, merge=0/0, ticks=310/274, in_queue=584, util=86.18% 00:15:40.554 nvme0n3: ios=2470/2560, merge=0/0, ticks=252/244, in_queue=496, util=88.70% 00:15:40.554 nvme0n4: ios=2560/2621, merge=0/0, ticks=293/310, in_queue=603, util=89.64% 00:15:40.554 14:57:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:40.554 [global] 00:15:40.554 thread=1 00:15:40.554 invalidate=1 00:15:40.554 rw=write 00:15:40.554 time_based=1 00:15:40.554 runtime=1 00:15:40.554 ioengine=libaio 00:15:40.554 direct=1 00:15:40.554 bs=4096 00:15:40.554 iodepth=128 00:15:40.554 norandommap=0 00:15:40.554 numjobs=1 00:15:40.554 00:15:40.554 verify_dump=1 00:15:40.554 verify_backlog=512 00:15:40.554 verify_state_save=0 00:15:40.554 do_verify=1 00:15:40.554 verify=crc32c-intel 00:15:40.554 [job0] 00:15:40.554 filename=/dev/nvme0n1 00:15:40.554 [job1] 00:15:40.554 filename=/dev/nvme0n2 00:15:40.554 [job2] 00:15:40.554 filename=/dev/nvme0n3 00:15:40.554 [job3] 00:15:40.554 filename=/dev/nvme0n4 00:15:40.554 Could not set queue depth (nvme0n1) 00:15:40.554 Could not set queue depth (nvme0n2) 00:15:40.554 Could not set queue depth (nvme0n3) 00:15:40.554 Could not set queue depth (nvme0n4) 00:15:40.815 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:40.815 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:40.815 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:15:40.815 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:40.815 fio-3.35 00:15:40.815 Starting 4 threads 00:15:42.227 00:15:42.227 job0: (groupid=0, jobs=1): err= 0: pid=1805468: Mon Jul 15 14:57:57 2024 00:15:42.227 read: IOPS=14.3k, BW=55.9MiB/s (58.7MB/s)(56.0MiB/1001msec) 00:15:42.227 slat (nsec): min=1158, max=1250.3k, avg=33993.83, stdev=123104.17 00:15:42.227 clat (usec): min=3176, max=5596, avg=4493.16, stdev=414.86 00:15:42.227 lat (usec): min=3260, max=5663, avg=4527.16, stdev=407.47 00:15:42.227 clat percentiles (usec): 00:15:42.227 | 1.00th=[ 3490], 5.00th=[ 3752], 10.00th=[ 3916], 20.00th=[ 4113], 00:15:42.227 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4686], 00:15:42.227 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5080], 00:15:42.227 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 5538], 99.95th=[ 5538], 00:15:42.227 | 99.99th=[ 5538] 00:15:42.227 write: IOPS=14.5k, BW=56.8MiB/s (59.5MB/s)(56.8MiB/1001msec); 0 zone resets 00:15:42.227 slat (nsec): min=1671, max=1076.9k, avg=33024.48, stdev=115471.04 00:15:42.227 clat (usec): min=387, max=5444, avg=4295.02, stdev=438.66 00:15:42.227 lat (usec): min=1304, max=5445, avg=4328.04, stdev=433.61 00:15:42.227 clat percentiles (usec): 00:15:42.227 | 1.00th=[ 3261], 5.00th=[ 3523], 10.00th=[ 3687], 20.00th=[ 3916], 00:15:42.227 | 30.00th=[ 4113], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:15:42.227 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 4883], 00:15:42.227 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5342], 99.95th=[ 5342], 00:15:42.227 | 99.99th=[ 5473] 00:15:42.227 bw ( KiB/s): min=53888, max=61440, per=42.01%, avg=57664.00, stdev=5340.07, samples=2 00:15:42.227 iops : min=13472, max=15360, avg=14416.00, stdev=1335.02, samples=2 00:15:42.227 lat (usec) : 500=0.01% 00:15:42.227 lat (msec) : 2=0.11%, 4=18.76%, 10=81.13% 00:15:42.227 cpu : usr=4.60%, sys=9.50%, ctx=2294, majf=0, minf=1 00:15:42.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:42.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.227 issued rwts: total=14336,14543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.227 job1: (groupid=0, jobs=1): err= 0: pid=1805482: Mon Jul 15 14:57:57 2024 00:15:42.227 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:15:42.227 slat (nsec): min=1183, max=3351.8k, avg=79964.57, stdev=250492.12 00:15:42.227 clat (usec): min=5043, max=19316, avg=10363.88, stdev=3267.72 00:15:42.227 lat (usec): min=5416, max=19318, avg=10443.85, stdev=3291.10 00:15:42.227 clat percentiles (usec): 00:15:42.227 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6587], 00:15:42.227 | 30.00th=[ 6652], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:15:42.227 | 70.00th=[11863], 80.00th=[12387], 90.00th=[15008], 95.00th=[16450], 00:15:42.227 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17957], 99.95th=[19268], 00:15:42.227 | 99.99th=[19268] 00:15:42.227 write: IOPS=6431, BW=25.1MiB/s (26.3MB/s)(25.2MiB/1002msec); 0 zone resets 00:15:42.227 slat (nsec): min=1667, max=2677.7k, avg=76252.32, stdev=238533.85 00:15:42.227 clat (usec): min=1753, max=17173, avg=9828.78, stdev=3344.42 00:15:42.227 lat (usec): min=2473, max=17687, avg=9905.03, stdev=3367.27 
00:15:42.227 clat percentiles (usec): 00:15:42.227 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 6194], 20.00th=[ 6259], 00:15:42.227 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[10814], 60.00th=[11207], 00:15:42.227 | 70.00th=[11600], 80.00th=[12125], 90.00th=[14877], 95.00th=[15401], 00:15:42.227 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:15:42.227 | 99.99th=[17171] 00:15:42.227 bw ( KiB/s): min=21864, max=28672, per=18.41%, avg=25268.00, stdev=4813.98, samples=2 00:15:42.227 iops : min= 5466, max= 7168, avg=6317.00, stdev=1203.50, samples=2 00:15:42.227 lat (msec) : 2=0.01%, 4=0.14%, 10=38.70%, 20=61.16% 00:15:42.227 cpu : usr=1.80%, sys=4.20%, ctx=1370, majf=0, minf=1 00:15:42.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:42.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.227 issued rwts: total=6144,6444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.227 job2: (groupid=0, jobs=1): err= 0: pid=1805496: Mon Jul 15 14:57:57 2024 00:15:42.227 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:15:42.227 slat (nsec): min=1222, max=2201.0k, avg=80759.07, stdev=231333.36 00:15:42.227 clat (usec): min=8172, max=14729, avg=10442.61, stdev=1293.95 00:15:42.227 lat (usec): min=8357, max=14738, avg=10523.36, stdev=1306.53 00:15:42.227 clat percentiles (usec): 00:15:42.227 | 1.00th=[ 8586], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9241], 00:15:42.227 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10683], 00:15:42.227 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12387], 95.00th=[12649], 00:15:42.227 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13960], 99.95th=[14222], 00:15:42.227 | 99.99th=[14746] 00:15:42.227 write: IOPS=6245, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1003msec); 0 zone resets 00:15:42.227 slat (nsec): min=1720, max=2095.6k, avg=77897.06, stdev=219488.25 00:15:42.227 clat (usec): min=2017, max=13687, avg=10007.67, stdev=1268.32 00:15:42.227 lat (usec): min=2024, max=13690, avg=10085.57, stdev=1279.00 00:15:42.227 clat percentiles (usec): 00:15:42.227 | 1.00th=[ 6390], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 8979], 00:15:42.227 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10290], 00:15:42.227 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[11994], 00:15:42.227 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13435], 99.95th=[13698], 00:15:42.227 | 99.99th=[13698] 00:15:42.228 bw ( KiB/s): min=23424, max=25728, per=17.90%, avg=24576.00, stdev=1629.17, samples=2 00:15:42.228 iops : min= 5856, max= 6432, avg=6144.00, stdev=407.29, samples=2 00:15:42.228 lat (msec) : 4=0.27%, 10=54.32%, 20=45.41% 00:15:42.228 cpu : usr=3.09%, sys=4.49%, ctx=1497, majf=0, minf=1 00:15:42.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:42.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.228 issued rwts: total=6144,6264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.228 job3: (groupid=0, jobs=1): err= 0: pid=1805508: Mon Jul 15 14:57:57 2024 00:15:42.228 read: IOPS=6647, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:15:42.228 slat (nsec): min=1199, max=1636.9k, avg=72903.25, 
stdev=232664.19 00:15:42.228 clat (usec): min=1246, max=15803, avg=9356.35, stdev=2196.91 00:15:42.228 lat (usec): min=2130, max=15812, avg=9429.25, stdev=2210.72 00:15:42.228 clat percentiles (usec): 00:15:42.228 | 1.00th=[ 6718], 5.00th=[ 7570], 10.00th=[ 7635], 20.00th=[ 7767], 00:15:42.228 | 30.00th=[ 7832], 40.00th=[ 7898], 50.00th=[ 7963], 60.00th=[ 8094], 00:15:42.228 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12518], 95.00th=[13042], 00:15:42.228 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15533], 99.95th=[15533], 00:15:42.228 | 99.99th=[15795] 00:15:42.228 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:15:42.228 slat (nsec): min=1682, max=3892.6k, avg=69649.45, stdev=234025.17 00:15:42.228 clat (usec): min=2140, max=15733, avg=8996.73, stdev=2172.14 00:15:42.228 lat (usec): min=2142, max=17061, avg=9066.38, stdev=2187.79 00:15:42.228 clat percentiles (usec): 00:15:42.228 | 1.00th=[ 5997], 5.00th=[ 7308], 10.00th=[ 7439], 20.00th=[ 7504], 00:15:42.228 | 30.00th=[ 7570], 40.00th=[ 7635], 50.00th=[ 7701], 60.00th=[ 7832], 00:15:42.228 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11863], 95.00th=[13566], 00:15:42.228 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15270], 99.95th=[15270], 00:15:42.228 | 99.99th=[15795] 00:15:42.228 bw ( KiB/s): min=23184, max=33176, per=20.53%, avg=28180.00, stdev=7065.41, samples=2 00:15:42.228 iops : min= 5796, max= 8294, avg=7045.00, stdev=1766.35, samples=2 00:15:42.228 lat (msec) : 2=0.01%, 4=0.30%, 10=64.10%, 20=35.58% 00:15:42.228 cpu : usr=2.90%, sys=4.10%, ctx=1366, majf=0, minf=1 00:15:42.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:42.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.228 issued rwts: total=6661,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.228 00:15:42.228 Run status group 0 (all jobs): 00:15:42.228 READ: bw=130MiB/s (136MB/s), 23.9MiB/s-55.9MiB/s (25.1MB/s-58.7MB/s), io=130MiB (136MB), run=1001-1003msec 00:15:42.228 WRITE: bw=134MiB/s (141MB/s), 24.4MiB/s-56.8MiB/s (25.6MB/s-59.5MB/s), io=134MiB (141MB), run=1001-1003msec 00:15:42.228 00:15:42.228 Disk stats (read/write): 00:15:42.228 nvme0n1: ios=11746/11776, merge=0/0, ticks=11556/10974, in_queue=22530, util=82.06% 00:15:42.228 nvme0n2: ios=5120/5307, merge=0/0, ticks=15973/15925, in_queue=31898, util=82.79% 00:15:42.228 nvme0n3: ios=4608/5116, merge=0/0, ticks=15783/16546, in_queue=32329, util=87.45% 00:15:42.228 nvme0n4: ios=5120/5456, merge=0/0, ticks=15901/16021, in_queue=31922, util=89.18% 00:15:42.228 14:57:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:42.228 [global] 00:15:42.228 thread=1 00:15:42.228 invalidate=1 00:15:42.228 rw=randwrite 00:15:42.228 time_based=1 00:15:42.228 runtime=1 00:15:42.228 ioengine=libaio 00:15:42.228 direct=1 00:15:42.228 bs=4096 00:15:42.228 iodepth=128 00:15:42.228 norandommap=0 00:15:42.228 numjobs=1 00:15:42.228 00:15:42.228 verify_dump=1 00:15:42.228 verify_backlog=512 00:15:42.228 verify_state_save=0 00:15:42.228 do_verify=1 00:15:42.228 verify=crc32c-intel 00:15:42.228 [job0] 00:15:42.228 filename=/dev/nvme0n1 00:15:42.228 [job1] 00:15:42.228 filename=/dev/nvme0n2 00:15:42.228 [job2] 00:15:42.228 filename=/dev/nvme0n3 00:15:42.228 [job3] 
00:15:42.228 filename=/dev/nvme0n4 00:15:42.228 Could not set queue depth (nvme0n1) 00:15:42.228 Could not set queue depth (nvme0n2) 00:15:42.228 Could not set queue depth (nvme0n3) 00:15:42.228 Could not set queue depth (nvme0n4) 00:15:42.489 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:42.489 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:42.489 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:42.489 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:42.489 fio-3.35 00:15:42.489 Starting 4 threads 00:15:43.874 00:15:43.874 job0: (groupid=0, jobs=1): err= 0: pid=1805991: Mon Jul 15 14:57:59 2024 00:15:43.874 read: IOPS=2858, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1005msec) 00:15:43.874 slat (nsec): min=1207, max=3409.9k, avg=170339.96, stdev=478310.35 00:15:43.874 clat (usec): min=4149, max=28145, avg=21384.32, stdev=1872.77 00:15:43.874 lat (usec): min=6695, max=28146, avg=21554.66, stdev=1849.42 00:15:43.874 clat percentiles (usec): 00:15:43.874 | 1.00th=[ 8848], 5.00th=[20055], 10.00th=[20841], 20.00th=[21103], 00:15:43.874 | 30.00th=[21365], 40.00th=[21627], 50.00th=[21627], 60.00th=[21890], 00:15:43.874 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22414], 95.00th=[22676], 00:15:43.874 | 99.00th=[23725], 99.50th=[24249], 99.90th=[26346], 99.95th=[26346], 00:15:43.874 | 99.99th=[28181] 00:15:43.874 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:15:43.874 slat (nsec): min=1628, max=3525.1k, avg=164061.74, stdev=473411.71 00:15:43.874 clat (usec): min=13112, max=25203, avg=21332.85, stdev=834.04 00:15:43.874 lat (usec): min=13122, max=25212, avg=21496.91, stdev=803.53 00:15:43.874 clat percentiles (usec): 00:15:43.874 | 1.00th=[17957], 5.00th=[20317], 10.00th=[20579], 20.00th=[20841], 00:15:43.874 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21365], 60.00th=[21627], 00:15:43.874 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[22414], 00:15:43.874 | 99.00th=[22938], 99.50th=[23200], 99.90th=[24249], 99.95th=[24511], 00:15:43.874 | 99.99th=[25297] 00:15:43.874 bw ( KiB/s): min=12288, max=12288, per=11.92%, avg=12288.00, stdev= 0.00, samples=2 00:15:43.874 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:15:43.874 lat (msec) : 10=0.56%, 20=3.67%, 50=95.78% 00:15:43.874 cpu : usr=1.00%, sys=2.39%, ctx=957, majf=0, minf=1 00:15:43.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:15:43.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.874 issued rwts: total=2873,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.874 job1: (groupid=0, jobs=1): err= 0: pid=1806000: Mon Jul 15 14:57:59 2024 00:15:43.874 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:15:43.874 slat (nsec): min=1186, max=1707.7k, avg=95535.42, stdev=309200.75 00:15:43.875 clat (usec): min=10325, max=12655, avg=12227.94, stdev=343.31 00:15:43.875 lat (usec): min=11764, max=12657, avg=12323.48, stdev=155.64 00:15:43.875 clat percentiles (usec): 00:15:43.875 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11731], 20.00th=[12125], 00:15:43.875 | 30.00th=[12256], 40.00th=[12256], 
50.00th=[12387], 60.00th=[12387], 00:15:43.875 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12518], 00:15:43.875 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12649], 99.95th=[12649], 00:15:43.875 | 99.99th=[12649] 00:15:43.875 write: IOPS=5504, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1003msec); 0 zone resets 00:15:43.875 slat (nsec): min=1635, max=1586.5k, avg=90990.82, stdev=291423.91 00:15:43.875 clat (usec): min=2071, max=13592, avg=11623.64, stdev=857.57 00:15:43.875 lat (usec): min=2662, max=13601, avg=11714.63, stdev=808.28 00:15:43.875 clat percentiles (usec): 00:15:43.875 | 1.00th=[ 7046], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:15:43.875 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:15:43.875 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:15:43.875 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13042], 99.95th=[13042], 00:15:43.875 | 99.99th=[13566] 00:15:43.875 bw ( KiB/s): min=21152, max=22000, per=20.94%, avg=21576.00, stdev=599.63, samples=2 00:15:43.875 iops : min= 5288, max= 5500, avg=5394.00, stdev=149.91, samples=2 00:15:43.875 lat (msec) : 4=0.16%, 10=0.64%, 20=99.20% 00:15:43.875 cpu : usr=1.60%, sys=2.30%, ctx=2169, majf=0, minf=1 00:15:43.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:43.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.875 issued rwts: total=5120,5521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.875 job2: (groupid=0, jobs=1): err= 0: pid=1806008: Mon Jul 15 14:57:59 2024 00:15:43.875 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:15:43.875 slat (nsec): min=1207, max=1100.9k, avg=95215.27, stdev=247788.21 00:15:43.875 clat (usec): min=10983, max=12698, avg=12237.51, stdev=293.42 00:15:43.875 lat (usec): min=11639, max=12700, avg=12332.72, stdev=162.83 00:15:43.875 clat percentiles (usec): 00:15:43.875 | 1.00th=[11338], 5.00th=[11600], 10.00th=[11731], 20.00th=[12125], 00:15:43.875 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12387], 60.00th=[12387], 00:15:43.875 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12518], 00:15:43.875 | 99.00th=[12649], 99.50th=[12649], 99.90th=[12649], 99.95th=[12649], 00:15:43.875 | 99.99th=[12649] 00:15:43.875 write: IOPS=5504, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1003msec); 0 zone resets 00:15:43.875 slat (nsec): min=1640, max=1246.3k, avg=90662.51, stdev=235093.58 00:15:43.875 clat (usec): min=1984, max=13595, avg=11619.52, stdev=801.64 00:15:43.875 lat (usec): min=2685, max=13603, avg=11710.18, stdev=767.26 00:15:43.875 clat percentiles (usec): 00:15:43.875 | 1.00th=[ 7046], 5.00th=[10945], 10.00th=[11076], 20.00th=[11600], 00:15:43.875 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:15:43.875 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:15:43.875 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13566], 99.95th=[13566], 00:15:43.875 | 99.99th=[13566] 00:15:43.875 bw ( KiB/s): min=21192, max=21960, per=20.94%, avg=21576.00, stdev=543.06, samples=2 00:15:43.875 iops : min= 5298, max= 5490, avg=5394.00, stdev=135.76, samples=2 00:15:43.875 lat (msec) : 2=0.01%, 4=0.15%, 10=0.75%, 20=99.09% 00:15:43.875 cpu : usr=1.60%, sys=4.59%, ctx=1757, majf=0, minf=1 00:15:43.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:43.875 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.875 issued rwts: total=5120,5521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.875 job3: (groupid=0, jobs=1): err= 0: pid=1806009: Mon Jul 15 14:57:59 2024 00:15:43.875 read: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(44.4MiB/1005msec) 00:15:43.875 slat (nsec): min=1250, max=1277.4k, avg=42803.33, stdev=155337.68 00:15:43.875 clat (usec): min=4460, max=9335, avg=5626.66, stdev=280.97 00:15:43.875 lat (usec): min=4564, max=9346, avg=5669.46, stdev=292.83 00:15:43.875 clat percentiles (usec): 00:15:43.875 | 1.00th=[ 5014], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5473], 00:15:43.875 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5604], 60.00th=[ 5669], 00:15:43.875 | 70.00th=[ 5735], 80.00th=[ 5735], 90.00th=[ 5800], 95.00th=[ 5997], 00:15:43.875 | 99.00th=[ 6390], 99.50th=[ 6849], 99.90th=[ 8291], 99.95th=[ 9241], 00:15:43.875 | 99.99th=[ 9372] 00:15:43.875 write: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(46.0MiB/1005msec); 0 zone resets 00:15:43.875 slat (nsec): min=1671, max=1540.8k, avg=41691.55, stdev=150238.30 00:15:43.875 clat (usec): min=2281, max=9920, avg=5392.97, stdev=355.50 00:15:43.875 lat (usec): min=2290, max=9928, avg=5434.66, stdev=366.83 00:15:43.875 clat percentiles (usec): 00:15:43.875 | 1.00th=[ 4424], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5276], 00:15:43.875 | 30.00th=[ 5342], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:15:43.875 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5604], 95.00th=[ 5800], 00:15:43.875 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 9372], 99.95th=[ 9503], 00:15:43.875 | 99.99th=[ 9896] 00:15:43.875 bw ( KiB/s): min=46992, max=47072, per=45.64%, avg=47032.00, stdev=56.57, samples=2 00:15:43.875 iops : min=11748, max=11768, avg=11758.00, stdev=14.14, samples=2 00:15:43.875 lat (msec) : 4=0.33%, 10=99.67% 00:15:43.875 cpu : usr=3.98%, sys=6.08%, ctx=1655, majf=0, minf=1 00:15:43.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:43.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.875 issued rwts: total=11373,11776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.875 00:15:43.875 Run status group 0 (all jobs): 00:15:43.875 READ: bw=95.2MiB/s (99.8MB/s), 11.2MiB/s-44.2MiB/s (11.7MB/s-46.4MB/s), io=95.6MiB (100MB), run=1003-1005msec 00:15:43.875 WRITE: bw=101MiB/s (106MB/s), 11.9MiB/s-45.8MiB/s (12.5MB/s-48.0MB/s), io=101MiB (106MB), run=1003-1005msec 00:15:43.875 00:15:43.875 Disk stats (read/write): 00:15:43.875 nvme0n1: ios=2487/2560, merge=0/0, ticks=13019/13273, in_queue=26292, util=85.67% 00:15:43.875 nvme0n2: ios=4352/4608, merge=0/0, ticks=13359/13424, in_queue=26783, util=85.98% 00:15:43.875 nvme0n3: ios=4352/4608, merge=0/0, ticks=12954/13013, in_queue=25967, util=88.59% 00:15:43.875 nvme0n4: ios=9727/9728, merge=0/0, ticks=52963/50890, in_queue=103853, util=89.52% 00:15:43.875 14:57:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:43.875 14:57:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1806317 00:15:43.875 14:57:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:43.875 14:57:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:43.875 [global] 00:15:43.875 thread=1 00:15:43.875 invalidate=1 00:15:43.875 rw=read 00:15:43.875 time_based=1 00:15:43.875 runtime=10 00:15:43.875 ioengine=libaio 00:15:43.875 direct=1 00:15:43.875 bs=4096 00:15:43.875 iodepth=1 00:15:43.875 norandommap=1 00:15:43.875 numjobs=1 00:15:43.875 00:15:43.875 [job0] 00:15:43.875 filename=/dev/nvme0n1 00:15:43.875 [job1] 00:15:43.875 filename=/dev/nvme0n2 00:15:43.875 [job2] 00:15:43.875 filename=/dev/nvme0n3 00:15:43.875 [job3] 00:15:43.875 filename=/dev/nvme0n4 00:15:43.875 Could not set queue depth (nvme0n1) 00:15:43.875 Could not set queue depth (nvme0n2) 00:15:43.875 Could not set queue depth (nvme0n3) 00:15:43.875 Could not set queue depth (nvme0n4) 00:15:44.137 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:44.137 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:44.137 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:44.137 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:44.137 fio-3.35 00:15:44.137 Starting 4 threads 00:15:46.679 14:58:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:46.679 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=62795776, buflen=4096 00:15:46.679 fio: pid=1806518, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:46.679 14:58:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:46.940 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=68571136, buflen=4096 00:15:46.940 fio: pid=1806516, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:46.940 14:58:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:46.940 14:58:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:47.201 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4681728, buflen=4096 00:15:47.201 fio: pid=1806511, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:47.201 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.201 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:47.201 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=4202496, buflen=4096 00:15:47.201 fio: pid=1806512, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:47.201 00:15:47.201 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1806511: Mon Jul 15 14:58:03 2024 00:15:47.201 read: IOPS=6021, BW=23.5MiB/s (24.7MB/s)(68.5MiB/2911msec) 00:15:47.201 slat (usec): min=5, max=17016, avg=16.70, stdev=166.10 00:15:47.201 clat (usec): min=22, max=523, avg=146.51, stdev=84.13 00:15:47.201 lat (usec): min=49, max=17205, avg=163.21, stdev=189.72 00:15:47.201 clat percentiles (usec): 00:15:47.201 | 
1.00th=[ 50], 5.00th=[ 62], 10.00th=[ 67], 20.00th=[ 73], 00:15:47.201 | 30.00th=[ 82], 40.00th=[ 96], 50.00th=[ 109], 60.00th=[ 128], 00:15:47.201 | 70.00th=[ 204], 80.00th=[ 233], 90.00th=[ 265], 95.00th=[ 297], 00:15:47.201 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[ 429], 99.95th=[ 453], 00:15:47.201 | 99.99th=[ 478] 00:15:47.201 bw ( KiB/s): min=19512, max=25280, per=20.38%, avg=22024.00, stdev=2679.84, samples=5 00:15:47.201 iops : min= 4878, max= 6320, avg=5506.00, stdev=669.96, samples=5 00:15:47.201 lat (usec) : 50=1.14%, 100=42.00%, 250=43.51%, 500=13.34%, 750=0.01% 00:15:47.201 cpu : usr=4.95%, sys=13.16%, ctx=17532, majf=0, minf=1 00:15:47.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.201 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.201 issued rwts: total=17528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.201 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1806512: Mon Jul 15 14:58:03 2024 00:15:47.201 read: IOPS=10.9k, BW=42.8MiB/s (44.8MB/s)(132MiB/3087msec) 00:15:47.201 slat (usec): min=5, max=10032, avg= 9.66, stdev=108.77 00:15:47.201 clat (usec): min=29, max=498, avg=80.14, stdev=58.54 00:15:47.201 lat (usec): min=49, max=10221, avg=89.80, stdev=126.86 00:15:47.201 clat percentiles (usec): 00:15:47.201 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 51], 20.00th=[ 52], 00:15:47.201 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 61], 00:15:47.201 | 70.00th=[ 69], 80.00th=[ 82], 90.00th=[ 129], 95.00th=[ 237], 00:15:47.201 | 99.00th=[ 306], 99.50th=[ 351], 99.90th=[ 404], 99.95th=[ 429], 00:15:47.201 | 99.99th=[ 461] 00:15:47.201 bw ( KiB/s): min=22280, max=63208, per=40.51%, avg=43779.20, stdev=16941.32, samples=5 00:15:47.201 iops : min= 5570, max=15802, avg=10944.80, stdev=4235.33, samples=5 00:15:47.201 lat (usec) : 50=8.71%, 100=76.79%, 250=10.75%, 500=3.75% 00:15:47.201 cpu : usr=4.50%, sys=14.61%, ctx=33802, majf=0, minf=1 00:15:47.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.201 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.201 issued rwts: total=33795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.201 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1806516: Mon Jul 15 14:58:03 2024 00:15:47.201 read: IOPS=6096, BW=23.8MiB/s (25.0MB/s)(65.4MiB/2746msec) 00:15:47.201 slat (usec): min=5, max=7785, avg=13.81, stdev=80.47 00:15:47.201 clat (usec): min=47, max=469, avg=147.69, stdev=79.17 00:15:47.201 lat (usec): min=56, max=8004, avg=161.50, stdev=117.26 00:15:47.201 clat percentiles (usec): 00:15:47.201 | 1.00th=[ 58], 5.00th=[ 66], 10.00th=[ 73], 20.00th=[ 82], 00:15:47.201 | 30.00th=[ 91], 40.00th=[ 100], 50.00th=[ 110], 60.00th=[ 125], 00:15:47.201 | 70.00th=[ 204], 80.00th=[ 231], 90.00th=[ 260], 95.00th=[ 281], 00:15:47.201 | 99.00th=[ 371], 99.50th=[ 400], 99.90th=[ 441], 99.95th=[ 449], 00:15:47.201 | 99.99th=[ 469] 00:15:47.201 bw ( KiB/s): min=21232, max=25336, per=22.14%, avg=23929.60, stdev=1594.24, samples=5 00:15:47.201 iops : min= 5308, max= 6334, avg=5982.40, stdev=398.56, samples=5 
00:15:47.201 lat (usec) : 50=0.02%, 100=39.92%, 250=47.76%, 500=12.29% 00:15:47.201 cpu : usr=4.55%, sys=11.55%, ctx=16745, majf=0, minf=1 00:15:47.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.201 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.201 issued rwts: total=16742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.201 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1806518: Mon Jul 15 14:58:03 2024 00:15:47.201 read: IOPS=5956, BW=23.3MiB/s (24.4MB/s)(59.9MiB/2574msec) 00:15:47.201 slat (nsec): min=5368, max=64053, avg=14329.54, stdev=10939.21 00:15:47.201 clat (usec): min=51, max=525, avg=150.70, stdev=84.07 00:15:47.201 lat (usec): min=58, max=560, avg=165.03, stdev=91.11 00:15:47.201 clat percentiles (usec): 00:15:47.201 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 81], 00:15:47.201 | 30.00th=[ 91], 40.00th=[ 102], 50.00th=[ 111], 60.00th=[ 128], 00:15:47.201 | 70.00th=[ 208], 80.00th=[ 235], 90.00th=[ 265], 95.00th=[ 318], 00:15:47.201 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 445], 99.95th=[ 465], 00:15:47.201 | 99.99th=[ 498] 00:15:47.201 bw ( KiB/s): min=19272, max=29704, per=22.18%, avg=23972.80, stdev=4280.49, samples=5 00:15:47.201 iops : min= 4818, max= 7426, avg=5993.20, stdev=1070.12, samples=5 00:15:47.201 lat (usec) : 100=38.35%, 250=47.27%, 500=14.37%, 750=0.01% 00:15:47.201 cpu : usr=4.04%, sys=13.53%, ctx=15333, majf=0, minf=2 00:15:47.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.201 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.201 issued rwts: total=15332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.201 00:15:47.201 Run status group 0 (all jobs): 00:15:47.201 READ: bw=106MiB/s (111MB/s), 23.3MiB/s-42.8MiB/s (24.4MB/s-44.8MB/s), io=326MiB (342MB), run=2574-3087msec 00:15:47.201 00:15:47.201 Disk stats (read/write): 00:15:47.201 nvme0n1: ios=16705/0, merge=0/0, ticks=1691/0, in_queue=1691, util=93.79% 00:15:47.201 nvme0n2: ios=30325/0, merge=0/0, ticks=1974/0, in_queue=1974, util=94.50% 00:15:47.201 nvme0n3: ios=15604/0, merge=0/0, ticks=1691/0, in_queue=1691, util=96.04% 00:15:47.201 nvme0n4: ios=14039/0, merge=0/0, ticks=1445/0, in_queue=1445, util=96.07% 00:15:47.201 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.201 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:47.461 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.461 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:47.722 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.722 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:15:47.722 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.722 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:47.981 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.981 14:58:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:48.241 14:58:04 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:48.241 14:58:04 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 1806317 00:15:48.241 14:58:04 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:48.242 14:58:04 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:49.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:49.623 nvmf hotplug test: fio failed as expected 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:49.623 rmmod nvme_rdma 00:15:49.623 rmmod nvme_fabrics 00:15:49.623 14:58:05 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1802808 ']' 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1802808 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1802808 ']' 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1802808 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:49.623 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1802808 00:15:49.884 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:49.884 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:49.884 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1802808' 00:15:49.884 killing process with pid 1802808 00:15:49.884 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1802808 00:15:49.884 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1802808 00:15:49.884 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:49.884 14:58:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:49.884 00:15:49.884 real 0m28.432s 00:15:49.884 user 2m49.432s 00:15:49.884 sys 0m10.730s 00:15:49.884 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.884 14:58:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.884 ************************************ 00:15:49.884 END TEST nvmf_fio_target 00:15:49.884 ************************************ 00:15:50.144 14:58:05 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:50.144 14:58:05 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:15:50.144 14:58:05 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:50.144 14:58:05 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.144 14:58:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:50.144 ************************************ 00:15:50.144 START TEST nvmf_bdevio 00:15:50.144 ************************************ 00:15:50.144 14:58:05 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:15:50.144 * Looking for test storage... 
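The shell trace above walks the fio-target teardown one rpc.py call at a time. Condensed into a stand-alone sketch (paths shortened to rpc.py; the glob over the verify-state files and the single combined modprobe are simplifications of what the trace shows):

    # Condensed sketch of the fio-target teardown traced above (not the script itself).
    for bdev in Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        rpc.py bdev_malloc_delete "$bdev"              # drop the backing bdevs behind the subsystem namespaces
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the host from cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job*-verify.state                    # fio verify-state files left by the jobs
    modprobe -v -r nvme-rdma nvme-fabrics              # unload the host-side transport modules
    kill "$nvmfpid"                                    # stop nvmf_tgt (pid 1802808 in this run)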
00:15:50.144 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.144 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.145 14:58:06 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:58.283 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
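The gather_supported_nvmf_pci_devs trace that continues below reduces to a vendor/device-ID match followed by a sysfs glob per PCI function. A minimal stand-alone sketch of the same idea, assuming lspci is available (the helper itself reads a prebuilt pci_bus_cache rather than calling lspci):

    # Find ConnectX-4 Lx functions (15b3:1015, as matched below) and the netdevs bound to them.
    for pci in $(lspci -Dn -d 15b3:1015 | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net device under $pci: $(basename "$netdev")"
        done
    done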
00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:58.284 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:58.284 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:58.284 Found net devices under 0000:98:00.0: mlx_0_0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:58.284 Found net devices under 0000:98:00.1: mlx_0_1 00:15:58.284 
14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:58.284 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.284 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:58.284 altname enp152s0f0np0 00:15:58.284 altname ens817f0np0 00:15:58.284 inet 192.168.100.8/24 scope global mlx_0_0 00:15:58.284 valid_lft forever preferred_lft forever 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:58.284 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.284 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:58.284 altname enp152s0f1np1 00:15:58.284 altname ens817f1np1 00:15:58.284 inet 192.168.100.9/24 scope global mlx_0_1 00:15:58.284 valid_lft forever preferred_lft forever 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:58.284 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:58.285 192.168.100.9' 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:58.285 192.168.100.9' 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:58.285 192.168.100.9' 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:58.285 14:58:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:58.285 14:58:14 
nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1811888 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1811888 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1811888 ']' 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.285 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:58.285 [2024-07-15 14:58:14.075922] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:58.285 [2024-07-15 14:58:14.075982] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.285 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.285 [2024-07-15 14:58:14.164930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.285 [2024-07-15 14:58:14.257865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.285 [2024-07-15 14:58:14.257928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.285 [2024-07-15 14:58:14.257936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.285 [2024-07-15 14:58:14.257943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.285 [2024-07-15 14:58:14.257949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
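nvmfappstart, traced above, boils down to launching nvmf_tgt with the core mask from the test config and blocking until its RPC socket answers. A hedged sketch of that pattern; the rpc_get_methods poll and the 0.5 s interval are assumptions, and the real waitforlisten helper is more elaborate:

    # Start the target on cores 3-6 (mask 0x78) with all trace groups enabled, as in the run above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # Wait until the app listens on /var/tmp/spdk.sock; bail out if it dies during startup.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done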
00:15:58.285 [2024-07-15 14:58:14.258117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:58.285 [2024-07-15 14:58:14.258287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:58.285 [2024-07-15 14:58:14.258497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:58.285 [2024-07-15 14:58:14.258499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.856 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.856 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:58.856 14:58:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.856 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.857 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.118 14:58:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.118 14:58:14 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:59.118 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.118 14:58:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.118 [2024-07-15 14:58:14.960111] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9a9b40/0x9ae030) succeed. 00:15:59.118 [2024-07-15 14:58:14.975694] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9ab180/0x9ef6c0) succeed. 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.118 Malloc0 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.118 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.379 [2024-07-15 14:58:15.187158] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:59.379 { 00:15:59.379 "params": { 00:15:59.379 "name": "Nvme$subsystem", 00:15:59.379 "trtype": "$TEST_TRANSPORT", 00:15:59.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:59.379 "adrfam": "ipv4", 00:15:59.379 "trsvcid": "$NVMF_PORT", 00:15:59.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:59.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:59.379 "hdgst": ${hdgst:-false}, 00:15:59.379 "ddgst": ${ddgst:-false} 00:15:59.379 }, 00:15:59.379 "method": "bdev_nvme_attach_controller" 00:15:59.379 } 00:15:59.379 EOF 00:15:59.379 )") 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:59.379 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:59.379 "params": { 00:15:59.379 "name": "Nvme1", 00:15:59.379 "trtype": "rdma", 00:15:59.379 "traddr": "192.168.100.8", 00:15:59.379 "adrfam": "ipv4", 00:15:59.379 "trsvcid": "4420", 00:15:59.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.379 "hdgst": false, 00:15:59.379 "ddgst": false 00:15:59.379 }, 00:15:59.379 "method": "bdev_nvme_attach_controller" 00:15:59.379 }' 00:15:59.379 [2024-07-15 14:58:15.249993] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:59.379 [2024-07-15 14:58:15.250058] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812235 ] 00:15:59.379 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.379 [2024-07-15 14:58:15.321058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:59.379 [2024-07-15 14:58:15.396488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.379 [2024-07-15 14:58:15.396665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.379 [2024-07-15 14:58:15.396668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.640 I/O targets: 00:15:59.640 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:59.640 00:15:59.640 00:15:59.640 CUnit - A unit testing framework for C - Version 2.1-3 00:15:59.640 http://cunit.sourceforge.net/ 00:15:59.640 00:15:59.640 00:15:59.640 Suite: bdevio tests on: Nvme1n1 00:15:59.640 Test: blockdev write read block ...passed 00:15:59.640 Test: blockdev write zeroes read block ...passed 00:15:59.640 Test: blockdev write zeroes read no split ...passed 00:15:59.640 Test: blockdev write zeroes read split ...passed 00:15:59.640 Test: blockdev write zeroes read split partial ...passed 00:15:59.640 Test: blockdev reset ...[2024-07-15 14:58:15.614977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:59.640 [2024-07-15 14:58:15.644545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:59.640 [2024-07-15 14:58:15.685754] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:59.640 passed 00:15:59.640 Test: blockdev write read 8 blocks ...passed 00:15:59.640 Test: blockdev write read size > 128k ...passed 00:15:59.640 Test: blockdev write read invalid size ...passed 00:15:59.640 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:59.640 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:59.640 Test: blockdev write read max offset ...passed 00:15:59.640 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:59.640 Test: blockdev writev readv 8 blocks ...passed 00:15:59.640 Test: blockdev writev readv 30 x 1block ...passed 00:15:59.640 Test: blockdev writev readv block ...passed 00:15:59.640 Test: blockdev writev readv size > 128k ...passed 00:15:59.640 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:59.640 Test: blockdev comparev and writev ...[2024-07-15 14:58:15.690888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:59.640 [2024-07-15 14:58:15.690912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:59.640 [2024-07-15 14:58:15.690922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:59.640 [2024-07-15 14:58:15.690928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:59.640 [2024-07-15 14:58:15.691100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:59.640 [2024-07-15 14:58:15.691106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:59.640 [2024-07-15 14:58:15.691112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:59.640 [2024-07-15 14:58:15.691118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:59.640 [2024-07-15 14:58:15.691290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:59.640 [2024-07-15 14:58:15.691297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:59.640 [2024-07-15 14:58:15.691303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:59.640 [2024-07-15 14:58:15.691308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:59.640 [2024-07-15 14:58:15.691455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:59.640 [2024-07-15 14:58:15.691461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:59.640 [2024-07-15 14:58:15.691468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:59.640 [2024-07-15 14:58:15.691473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:59.640 passed 00:15:59.640 Test: blockdev nvme passthru rw ...passed 00:15:59.640 Test: blockdev nvme passthru vendor specific ...[2024-07-15 14:58:15.692326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:59.640 [2024-07-15 14:58:15.692333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:59.640 [2024-07-15 14:58:15.692375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:59.640 [2024-07-15 14:58:15.692381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:59.640 [2024-07-15 14:58:15.692422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:59.641 [2024-07-15 14:58:15.692428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:59.641 [2024-07-15 14:58:15.692468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:59.641 [2024-07-15 14:58:15.692473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:59.641 passed 00:15:59.641 Test: blockdev nvme admin passthru ...passed 00:15:59.641 Test: blockdev copy ...passed 00:15:59.641 00:15:59.641 Run Summary: Type Total Ran Passed Failed Inactive 00:15:59.641 suites 1 1 n/a 0 0 00:15:59.641 tests 23 23 23 0 0 00:15:59.641 asserts 152 152 152 0 n/a 00:15:59.641 00:15:59.641 Elapsed time = 0.237 seconds 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:59.901 rmmod nvme_rdma 00:15:59.901 rmmod nvme_fabrics 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1811888 ']' 00:15:59.901 14:58:15 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1811888 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1811888 ']' 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1811888 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.901 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1811888 00:16:00.162 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:00.162 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:00.162 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1811888' 00:16:00.162 killing process with pid 1811888 00:16:00.162 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1811888 00:16:00.162 14:58:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1811888 00:16:00.423 14:58:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.423 14:58:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:00.423 00:16:00.423 real 0m10.288s 00:16:00.423 user 0m11.303s 00:16:00.423 sys 0m6.493s 00:16:00.423 14:58:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:00.423 14:58:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:00.423 ************************************ 00:16:00.423 END TEST nvmf_bdevio 00:16:00.423 ************************************ 00:16:00.423 14:58:16 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:16:00.423 14:58:16 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:00.423 14:58:16 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:00.423 14:58:16 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.423 14:58:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:00.423 ************************************ 00:16:00.423 START TEST nvmf_auth_target 00:16:00.423 ************************************ 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:00.423 * Looking for test storage... 
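Pulled out of the trace above, the whole bdevio target setup is five rpc_cmd calls plus one bdevio invocation. Every argument below is taken from the target/bdevio.sh lines in this log, with rpc_cmd spelled out as scripts/rpc.py and /dev/fd/62 written as the process substitution it actually is:

    # bdevio target setup and run, as traced at target/bdevio.sh@18-24 above.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)   # gen_nvmf_target_json comes from test/nvmf/common.sh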
00:16:00.423 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.423 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:00.683 14:58:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:08.820 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.820 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:08.820 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:08.821 Found net devices under 0000:98:00.0: mlx_0_0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:08.821 Found net devices under 0000:98:00.1: mlx_0_1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:08.821 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:08.821 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:16:08.821 altname enp152s0f0np0 00:16:08.821 altname ens817f0np0 00:16:08.821 inet 192.168.100.8/24 scope global mlx_0_0 00:16:08.821 valid_lft forever preferred_lft forever 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:08.821 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:08.821 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:16:08.821 altname enp152s0f1np1 00:16:08.821 altname ens817f1np1 00:16:08.821 inet 192.168.100.9/24 scope global mlx_0_1 00:16:08.821 valid_lft forever preferred_lft forever 
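Note: the get_ip_address helper traced above amounts to a one-line lookup per RDMA port. A minimal standalone sketch using only the commands visible in this trace (the interface name is the first port from this run; the second port resolves the same way):

    # Resolve the IPv4 address of an RDMA netdev, as the trace does above
    interface=mlx_0_0
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    # -> 192.168.100.8 on this system; mlx_0_1 resolves to 192.168.100.9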
00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.821 
14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:08.821 192.168.100.9' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:08.821 192.168.100.9' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:08.821 192.168.100.9' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:08.821 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1816577 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1816577 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1816577 ']' 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
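Note: the nvmfappstart step above boils down to launching nvmf_tgt with auth logging enabled and waiting for its RPC socket. A rough, simplified sketch under the paths shown in this run (the socket poll is only a stand-in for the waitforlisten helper, which does more careful process checks):

    # Start the NVMe-oF target as nvmfappstart does above
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    # crude stand-in for waitforlisten: block until the default RPC socket appears
    until [ -S /var/tmp/spdk.sock ]; do sleep 1; done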
00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.822 14:58:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1816921 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0e181dec75ebec8e36bc7775b28b383c0a396158293e8b0c 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.v9i 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0e181dec75ebec8e36bc7775b28b383c0a396158293e8b0c 0 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0e181dec75ebec8e36bc7775b28b383c0a396158293e8b0c 0 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0e181dec75ebec8e36bc7775b28b383c0a396158293e8b0c 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.v9i 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.v9i 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.v9i 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff36353175def7920f2ea2c162e796b4df55f533506477c5d783329425d93fba 00:16:09.391 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.47n 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff36353175def7920f2ea2c162e796b4df55f533506477c5d783329425d93fba 3 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff36353175def7920f2ea2c162e796b4df55f533506477c5d783329425d93fba 3 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff36353175def7920f2ea2c162e796b4df55f533506477c5d783329425d93fba 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.47n 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.47n 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.47n 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=921e9bd3c28dcd383c3c29f560ddbbf3 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.3bt 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 921e9bd3c28dcd383c3c29f560ddbbf3 1 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 921e9bd3c28dcd383c3c29f560ddbbf3 1 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=921e9bd3c28dcd383c3c29f560ddbbf3 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.3bt 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.3bt 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.3bt 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=468e34601b3217fa917fa4fa21bf6d092674719b40fe01a4 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.anE 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 468e34601b3217fa917fa4fa21bf6d092674719b40fe01a4 2 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 468e34601b3217fa917fa4fa21bf6d092674719b40fe01a4 2 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=468e34601b3217fa917fa4fa21bf6d092674719b40fe01a4 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.anE 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.anE 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.anE 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=79435b81e07858c83d6ac0b81d6a651f38924dd98f850bcf 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kdW 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 79435b81e07858c83d6ac0b81d6a651f38924dd98f850bcf 2 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 79435b81e07858c83d6ac0b81d6a651f38924dd98f850bcf 2 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=79435b81e07858c83d6ac0b81d6a651f38924dd98f850bcf 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.654 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kdW 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kdW 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.kdW 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=646023b55a7af92d2bd89433331bc72c 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1hL 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 646023b55a7af92d2bd89433331bc72c 1 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 646023b55a7af92d2bd89433331bc72c 1 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=646023b55a7af92d2bd89433331bc72c 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:09.655 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1hL 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1hL 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.1hL 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=04d0c6522845f82d4fd496bfe85580ddac1e69aff664a060186d8ea451a9e469 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.AmL 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 04d0c6522845f82d4fd496bfe85580ddac1e69aff664a060186d8ea451a9e469 3 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 04d0c6522845f82d4fd496bfe85580ddac1e69aff664a060186d8ea451a9e469 3 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=04d0c6522845f82d4fd496bfe85580ddac1e69aff664a060186d8ea451a9e469 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.AmL 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.AmL 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.AmL 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1816577 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1816577 ']' 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
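Note: every gen_dhchap_key call traced above follows the same pattern; a minimal sketch for the sha512, 64-character case (the inline python step, elided here, wraps the hex secret into the DHHC-1:<digest-id>:<base64>: form that appears in the nvme connect commands further down in the log):

    # Random hex secret -> 0600-protected DHCHAP key file, as gen_dhchap_key does above
    key=$(xxd -p -c0 -l 32 /dev/urandom)        # 64 hex characters of secret material
    file=$(mktemp -t spdk.key-sha512.XXX)
    # format_dhchap_key (the python step in the trace) writes the DHHC-1 string into $file
    chmod 0600 "$file"
    echo "$file"                                # this path is what keys[] / ckeys[] record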
00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1816921 /var/tmp/host.sock 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1816921 ']' 00:16:09.983 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:09.984 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.984 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:09.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:09.984 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.984 14:58:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.v9i 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.v9i 00:16:10.328 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.v9i 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.47n ]] 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.47n 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.47n 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.47n 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.3bt 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.3bt 00:16:10.589 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.3bt 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.anE ]] 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.anE 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.anE 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.anE 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.kdW 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.kdW 00:16:10.850 14:58:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.kdW 00:16:11.109 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.1hL ]] 00:16:11.109 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1hL 00:16:11.109 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.109 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.109 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.109 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1hL 00:16:11.109 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1hL 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.AmL 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.AmL 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.AmL 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.369 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.629 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.630 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.630 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.889 00:16:11.889 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.889 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.889 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.889 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.889 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.889 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.889 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.889 14:58:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.889 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.889 { 00:16:11.889 "cntlid": 1, 00:16:11.889 "qid": 0, 00:16:11.889 "state": "enabled", 00:16:11.889 "thread": "nvmf_tgt_poll_group_000", 00:16:11.889 "listen_address": { 00:16:11.889 "trtype": "RDMA", 00:16:11.890 "adrfam": "IPv4", 00:16:11.890 "traddr": "192.168.100.8", 00:16:11.890 "trsvcid": "4420" 00:16:11.890 }, 00:16:11.890 "peer_address": { 00:16:11.890 "trtype": "RDMA", 00:16:11.890 "adrfam": "IPv4", 00:16:11.890 "traddr": "192.168.100.8", 00:16:11.890 "trsvcid": "33297" 00:16:11.890 }, 00:16:11.890 "auth": { 00:16:11.890 "state": "completed", 00:16:11.890 "digest": "sha256", 00:16:11.890 "dhgroup": "null" 00:16:11.890 } 00:16:11.890 } 00:16:11.890 ]' 00:16:11.890 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.150 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.150 14:58:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.150 14:58:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:12.150 14:58:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.150 14:58:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.150 14:58:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.150 14:58:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.427 14:58:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:16:12.998 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.259 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:13.259 14:58:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.259 14:58:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.259 14:58:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.259 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.259 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.259 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.518 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.778 00:16:13.778 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.778 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.778 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.778 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.778 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.778 14:58:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.778 14:58:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.778 14:58:29 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.778 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.778 { 00:16:13.778 "cntlid": 3, 00:16:13.778 "qid": 0, 00:16:13.778 "state": "enabled", 00:16:13.778 "thread": "nvmf_tgt_poll_group_000", 00:16:13.778 "listen_address": { 00:16:13.778 "trtype": "RDMA", 00:16:13.778 "adrfam": "IPv4", 00:16:13.778 "traddr": "192.168.100.8", 00:16:13.778 "trsvcid": "4420" 00:16:13.778 }, 00:16:13.778 "peer_address": { 00:16:13.778 "trtype": "RDMA", 00:16:13.778 "adrfam": "IPv4", 00:16:13.778 "traddr": "192.168.100.8", 00:16:13.778 "trsvcid": "33372" 00:16:13.778 }, 00:16:13.778 "auth": { 00:16:13.778 "state": "completed", 00:16:13.778 "digest": "sha256", 00:16:13.778 "dhgroup": "null" 00:16:13.778 } 00:16:13.778 } 00:16:13.778 ]' 00:16:13.778 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.039 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.039 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.039 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:14.039 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.039 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.039 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.039 14:58:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.298 14:58:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:16:14.871 14:58:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.131 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:15.131 14:58:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.131 14:58:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.131 14:58:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.131 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.131 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.131 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.391 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.391 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.652 { 00:16:15.652 "cntlid": 5, 00:16:15.652 "qid": 0, 00:16:15.652 "state": "enabled", 00:16:15.652 "thread": "nvmf_tgt_poll_group_000", 00:16:15.652 "listen_address": { 00:16:15.652 "trtype": "RDMA", 00:16:15.652 "adrfam": "IPv4", 00:16:15.652 "traddr": "192.168.100.8", 00:16:15.652 "trsvcid": "4420" 00:16:15.652 }, 00:16:15.652 "peer_address": { 00:16:15.652 "trtype": "RDMA", 00:16:15.652 "adrfam": "IPv4", 00:16:15.652 "traddr": "192.168.100.8", 00:16:15.652 "trsvcid": "58184" 00:16:15.652 }, 00:16:15.652 "auth": { 00:16:15.652 "state": "completed", 00:16:15.652 "digest": "sha256", 00:16:15.652 "dhgroup": "null" 00:16:15.652 } 00:16:15.652 } 00:16:15.652 ]' 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:15.652 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.913 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:15.913 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.913 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.913 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.913 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.913 14:58:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:16:16.854 14:58:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.854 14:58:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:16.854 14:58:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.854 14:58:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.854 14:58:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.854 14:58:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.854 14:58:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:16.854 14:58:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- 
# hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.114 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.374 00:16:17.374 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.374 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.374 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.634 { 00:16:17.634 "cntlid": 7, 00:16:17.634 "qid": 0, 00:16:17.634 "state": "enabled", 00:16:17.634 "thread": "nvmf_tgt_poll_group_000", 00:16:17.634 "listen_address": { 00:16:17.634 "trtype": "RDMA", 00:16:17.634 "adrfam": "IPv4", 00:16:17.634 "traddr": "192.168.100.8", 00:16:17.634 "trsvcid": "4420" 00:16:17.634 }, 00:16:17.634 "peer_address": { 00:16:17.634 "trtype": "RDMA", 00:16:17.634 "adrfam": "IPv4", 00:16:17.634 "traddr": "192.168.100.8", 00:16:17.634 "trsvcid": "56300" 00:16:17.634 }, 00:16:17.634 "auth": { 00:16:17.634 "state": "completed", 00:16:17.634 "digest": "sha256", 00:16:17.634 "dhgroup": "null" 00:16:17.634 } 00:16:17.634 } 00:16:17.634 ]' 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.634 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.910 14:58:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.849 14:58:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.109 00:16:19.109 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.109 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.109 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.369 { 00:16:19.369 "cntlid": 9, 00:16:19.369 "qid": 0, 00:16:19.369 "state": "enabled", 00:16:19.369 "thread": "nvmf_tgt_poll_group_000", 00:16:19.369 "listen_address": { 00:16:19.369 "trtype": "RDMA", 00:16:19.369 "adrfam": "IPv4", 00:16:19.369 "traddr": "192.168.100.8", 00:16:19.369 "trsvcid": "4420" 00:16:19.369 }, 00:16:19.369 "peer_address": { 00:16:19.369 "trtype": "RDMA", 00:16:19.369 "adrfam": "IPv4", 00:16:19.369 "traddr": "192.168.100.8", 00:16:19.369 "trsvcid": "40900" 00:16:19.369 }, 00:16:19.369 "auth": { 00:16:19.369 "state": "completed", 00:16:19.369 "digest": "sha256", 00:16:19.369 "dhgroup": "ffdhe2048" 00:16:19.369 } 00:16:19.369 } 00:16:19.369 ]' 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.369 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.629 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.629 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.629 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.629 14:58:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:16:20.568 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.568 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:20.568 14:58:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.568 14:58:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.568 14:58:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.568 
14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.568 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.568 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.827 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.088 00:16:21.088 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.088 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.088 14:58:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.088 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.088 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.088 14:58:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.088 14:58:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.088 14:58:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.088 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.088 { 00:16:21.088 "cntlid": 11, 00:16:21.088 "qid": 0, 00:16:21.088 "state": "enabled", 00:16:21.088 "thread": "nvmf_tgt_poll_group_000", 00:16:21.088 "listen_address": { 00:16:21.088 "trtype": "RDMA", 
00:16:21.088 "adrfam": "IPv4", 00:16:21.088 "traddr": "192.168.100.8", 00:16:21.088 "trsvcid": "4420" 00:16:21.088 }, 00:16:21.088 "peer_address": { 00:16:21.088 "trtype": "RDMA", 00:16:21.088 "adrfam": "IPv4", 00:16:21.088 "traddr": "192.168.100.8", 00:16:21.088 "trsvcid": "49694" 00:16:21.088 }, 00:16:21.088 "auth": { 00:16:21.088 "state": "completed", 00:16:21.088 "digest": "sha256", 00:16:21.088 "dhgroup": "ffdhe2048" 00:16:21.088 } 00:16:21.088 } 00:16:21.088 ]' 00:16:21.088 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.348 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.348 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.348 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.348 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.348 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.348 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.348 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.608 14:58:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:22.547 
14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.547 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.808 00:16:22.808 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.808 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.808 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.069 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.069 14:58:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.069 14:58:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.069 14:58:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.069 14:58:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.069 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.069 { 00:16:23.069 "cntlid": 13, 00:16:23.069 "qid": 0, 00:16:23.069 "state": "enabled", 00:16:23.069 "thread": "nvmf_tgt_poll_group_000", 00:16:23.069 "listen_address": { 00:16:23.069 "trtype": "RDMA", 00:16:23.069 "adrfam": "IPv4", 00:16:23.069 "traddr": "192.168.100.8", 00:16:23.069 "trsvcid": "4420" 00:16:23.069 }, 00:16:23.069 "peer_address": { 00:16:23.069 "trtype": "RDMA", 00:16:23.069 "adrfam": "IPv4", 00:16:23.069 "traddr": "192.168.100.8", 00:16:23.069 "trsvcid": "43861" 00:16:23.069 }, 00:16:23.069 "auth": { 00:16:23.069 "state": "completed", 00:16:23.069 "digest": "sha256", 00:16:23.069 "dhgroup": "ffdhe2048" 00:16:23.069 } 00:16:23.069 } 00:16:23.069 ]' 00:16:23.069 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.069 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.069 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.069 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.069 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:16:23.330 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.330 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.330 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.330 14:58:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:16:24.272 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.272 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:24.272 14:58:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.272 14:58:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.272 14:58:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.272 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.272 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.272 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:24.533 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:24.794 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.794 { 00:16:24.794 "cntlid": 15, 00:16:24.794 "qid": 0, 00:16:24.794 "state": "enabled", 00:16:24.794 "thread": "nvmf_tgt_poll_group_000", 00:16:24.794 "listen_address": { 00:16:24.794 "trtype": "RDMA", 00:16:24.794 "adrfam": "IPv4", 00:16:24.794 "traddr": "192.168.100.8", 00:16:24.794 "trsvcid": "4420" 00:16:24.794 }, 00:16:24.794 "peer_address": { 00:16:24.794 "trtype": "RDMA", 00:16:24.794 "adrfam": "IPv4", 00:16:24.794 "traddr": "192.168.100.8", 00:16:24.794 "trsvcid": "56780" 00:16:24.794 }, 00:16:24.794 "auth": { 00:16:24.794 "state": "completed", 00:16:24.794 "digest": "sha256", 00:16:24.794 "dhgroup": "ffdhe2048" 00:16:24.794 } 00:16:24.794 } 00:16:24.794 ]' 00:16:24.794 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.054 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.054 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.054 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.054 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.054 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.054 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.054 14:58:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.313 14:58:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:16:26.252 14:58:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.252 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.512 00:16:26.512 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.512 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.512 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.772 14:58:42 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.772 { 00:16:26.772 "cntlid": 17, 00:16:26.772 "qid": 0, 00:16:26.772 "state": "enabled", 00:16:26.772 "thread": "nvmf_tgt_poll_group_000", 00:16:26.772 "listen_address": { 00:16:26.772 "trtype": "RDMA", 00:16:26.772 "adrfam": "IPv4", 00:16:26.772 "traddr": "192.168.100.8", 00:16:26.772 "trsvcid": "4420" 00:16:26.772 }, 00:16:26.772 "peer_address": { 00:16:26.772 "trtype": "RDMA", 00:16:26.772 "adrfam": "IPv4", 00:16:26.772 "traddr": "192.168.100.8", 00:16:26.772 "trsvcid": "34512" 00:16:26.772 }, 00:16:26.772 "auth": { 00:16:26.772 "state": "completed", 00:16:26.772 "digest": "sha256", 00:16:26.772 "dhgroup": "ffdhe3072" 00:16:26.772 } 00:16:26.772 } 00:16:26.772 ]' 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.772 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.033 14:58:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:16:27.981 14:58:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.981 14:58:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:27.981 14:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.981 14:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.981 14:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.981 14:58:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.981 14:58:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.981 14:58:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.241 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.501 00:16:28.501 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.501 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.501 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.501 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.501 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.502 14:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.502 14:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.502 14:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.502 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.502 { 00:16:28.502 "cntlid": 19, 00:16:28.502 "qid": 0, 00:16:28.502 "state": "enabled", 00:16:28.502 "thread": "nvmf_tgt_poll_group_000", 00:16:28.502 "listen_address": { 00:16:28.502 "trtype": "RDMA", 00:16:28.502 "adrfam": "IPv4", 00:16:28.502 "traddr": "192.168.100.8", 00:16:28.502 "trsvcid": "4420" 00:16:28.502 }, 00:16:28.502 "peer_address": { 00:16:28.502 "trtype": "RDMA", 00:16:28.502 "adrfam": "IPv4", 00:16:28.502 "traddr": "192.168.100.8", 00:16:28.502 "trsvcid": "39459" 00:16:28.502 }, 00:16:28.502 "auth": { 
00:16:28.502 "state": "completed", 00:16:28.502 "digest": "sha256", 00:16:28.502 "dhgroup": "ffdhe3072" 00:16:28.502 } 00:16:28.502 } 00:16:28.502 ]' 00:16:28.502 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.761 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.761 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.761 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.761 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.761 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.761 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.761 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.020 14:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:16:29.588 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.848 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:29.848 14:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.848 14:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.848 14:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.848 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.848 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.848 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.109 14:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.368 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.368 { 00:16:30.368 "cntlid": 21, 00:16:30.368 "qid": 0, 00:16:30.368 "state": "enabled", 00:16:30.368 "thread": "nvmf_tgt_poll_group_000", 00:16:30.368 "listen_address": { 00:16:30.368 "trtype": "RDMA", 00:16:30.368 "adrfam": "IPv4", 00:16:30.368 "traddr": "192.168.100.8", 00:16:30.368 "trsvcid": "4420" 00:16:30.368 }, 00:16:30.368 "peer_address": { 00:16:30.368 "trtype": "RDMA", 00:16:30.368 "adrfam": "IPv4", 00:16:30.368 "traddr": "192.168.100.8", 00:16:30.368 "trsvcid": "33945" 00:16:30.368 }, 00:16:30.368 "auth": { 00:16:30.368 "state": "completed", 00:16:30.368 "digest": "sha256", 00:16:30.368 "dhgroup": "ffdhe3072" 00:16:30.368 } 00:16:30.368 } 00:16:30.368 ]' 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.368 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.627 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.627 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.627 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.627 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.627 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.887 14:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:16:31.456 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.716 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.716 14:58:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.716 14:58:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.716 14:58:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.716 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.716 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.716 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.979 14:58:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.979 00:16:32.239 14:58:48 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.239 { 00:16:32.239 "cntlid": 23, 00:16:32.239 "qid": 0, 00:16:32.239 "state": "enabled", 00:16:32.239 "thread": "nvmf_tgt_poll_group_000", 00:16:32.239 "listen_address": { 00:16:32.239 "trtype": "RDMA", 00:16:32.239 "adrfam": "IPv4", 00:16:32.239 "traddr": "192.168.100.8", 00:16:32.239 "trsvcid": "4420" 00:16:32.239 }, 00:16:32.239 "peer_address": { 00:16:32.239 "trtype": "RDMA", 00:16:32.239 "adrfam": "IPv4", 00:16:32.239 "traddr": "192.168.100.8", 00:16:32.239 "trsvcid": "37324" 00:16:32.239 }, 00:16:32.239 "auth": { 00:16:32.239 "state": "completed", 00:16:32.239 "digest": "sha256", 00:16:32.239 "dhgroup": "ffdhe3072" 00:16:32.239 } 00:16:32.239 } 00:16:32.239 ]' 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.239 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.529 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.529 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.529 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.529 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.529 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.529 14:58:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:16:33.471 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.471 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:33.471 14:58:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.471 14:58:49 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.471 14:58:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.471 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.471 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.471 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.471 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.732 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.993 00:16:33.993 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.993 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.993 14:58:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.253 14:58:50 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.253 { 00:16:34.253 "cntlid": 25, 00:16:34.253 "qid": 0, 00:16:34.253 "state": "enabled", 00:16:34.253 "thread": "nvmf_tgt_poll_group_000", 00:16:34.253 "listen_address": { 00:16:34.253 "trtype": "RDMA", 00:16:34.253 "adrfam": "IPv4", 00:16:34.253 "traddr": "192.168.100.8", 00:16:34.253 "trsvcid": "4420" 00:16:34.253 }, 00:16:34.253 "peer_address": { 00:16:34.253 "trtype": "RDMA", 00:16:34.253 "adrfam": "IPv4", 00:16:34.253 "traddr": "192.168.100.8", 00:16:34.253 "trsvcid": "34920" 00:16:34.253 }, 00:16:34.253 "auth": { 00:16:34.253 "state": "completed", 00:16:34.253 "digest": "sha256", 00:16:34.253 "dhgroup": "ffdhe4096" 00:16:34.253 } 00:16:34.253 } 00:16:34.253 ]' 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.253 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.514 14:58:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:16:35.456 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.456 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:35.456 14:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.456 14:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.456 14:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.456 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.456 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:35.456 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.717 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.977 00:16:35.977 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.977 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.977 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.977 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.977 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.977 14:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.977 14:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.977 14:58:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.977 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.977 { 00:16:35.977 "cntlid": 27, 00:16:35.977 "qid": 0, 00:16:35.977 "state": "enabled", 00:16:35.977 "thread": "nvmf_tgt_poll_group_000", 00:16:35.977 "listen_address": { 00:16:35.977 "trtype": "RDMA", 00:16:35.977 "adrfam": "IPv4", 00:16:35.977 "traddr": "192.168.100.8", 00:16:35.977 "trsvcid": "4420" 00:16:35.977 }, 00:16:35.977 "peer_address": { 00:16:35.977 "trtype": "RDMA", 00:16:35.977 "adrfam": "IPv4", 00:16:35.977 "traddr": "192.168.100.8", 00:16:35.977 "trsvcid": "50829" 00:16:35.977 }, 00:16:35.977 "auth": { 00:16:35.977 "state": "completed", 00:16:35.977 "digest": "sha256", 00:16:35.977 "dhgroup": "ffdhe4096" 00:16:35.977 } 00:16:35.977 } 00:16:35.977 ]' 00:16:35.978 14:58:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.978 14:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 
]] 00:16:35.978 14:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.238 14:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.238 14:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.238 14:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.238 14:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.238 14:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.238 14:58:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:16:37.179 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.179 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:37.179 14:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.179 14:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.438 14:58:53 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.438 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.696 00:16:37.696 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.696 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.696 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.955 { 00:16:37.955 "cntlid": 29, 00:16:37.955 "qid": 0, 00:16:37.955 "state": "enabled", 00:16:37.955 "thread": "nvmf_tgt_poll_group_000", 00:16:37.955 "listen_address": { 00:16:37.955 "trtype": "RDMA", 00:16:37.955 "adrfam": "IPv4", 00:16:37.955 "traddr": "192.168.100.8", 00:16:37.955 "trsvcid": "4420" 00:16:37.955 }, 00:16:37.955 "peer_address": { 00:16:37.955 "trtype": "RDMA", 00:16:37.955 "adrfam": "IPv4", 00:16:37.955 "traddr": "192.168.100.8", 00:16:37.955 "trsvcid": "43752" 00:16:37.955 }, 00:16:37.955 "auth": { 00:16:37.955 "state": "completed", 00:16:37.955 "digest": "sha256", 00:16:37.955 "dhgroup": "ffdhe4096" 00:16:37.955 } 00:16:37.955 } 00:16:37.955 ]' 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.955 14:58:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.955 14:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.955 14:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.955 14:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.215 14:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:16:39.152 14:58:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.152 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:39.152 14:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.152 14:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.152 14:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.152 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.152 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.152 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.412 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.695 00:16:39.695 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.695 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.695 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:39.695 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.695 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.695 14:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.695 14:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.695 14:58:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.995 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.995 { 00:16:39.995 "cntlid": 31, 00:16:39.995 "qid": 0, 00:16:39.995 "state": "enabled", 00:16:39.995 "thread": "nvmf_tgt_poll_group_000", 00:16:39.995 "listen_address": { 00:16:39.995 "trtype": "RDMA", 00:16:39.995 "adrfam": "IPv4", 00:16:39.995 "traddr": "192.168.100.8", 00:16:39.995 "trsvcid": "4420" 00:16:39.995 }, 00:16:39.995 "peer_address": { 00:16:39.995 "trtype": "RDMA", 00:16:39.995 "adrfam": "IPv4", 00:16:39.995 "traddr": "192.168.100.8", 00:16:39.995 "trsvcid": "33303" 00:16:39.995 }, 00:16:39.995 "auth": { 00:16:39.995 "state": "completed", 00:16:39.995 "digest": "sha256", 00:16:39.995 "dhgroup": "ffdhe4096" 00:16:39.996 } 00:16:39.996 } 00:16:39.996 ]' 00:16:39.996 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.996 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.996 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.996 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.996 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.996 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.996 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.996 14:58:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.996 14:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:16:40.934 14:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.934 14:58:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:40.934 14:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.934 14:58:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.193 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.452 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.711 { 00:16:41.711 "cntlid": 33, 00:16:41.711 "qid": 0, 00:16:41.711 "state": "enabled", 00:16:41.711 "thread": "nvmf_tgt_poll_group_000", 00:16:41.711 "listen_address": { 00:16:41.711 "trtype": "RDMA", 00:16:41.711 "adrfam": "IPv4", 00:16:41.711 "traddr": "192.168.100.8", 
00:16:41.711 "trsvcid": "4420" 00:16:41.711 }, 00:16:41.711 "peer_address": { 00:16:41.711 "trtype": "RDMA", 00:16:41.711 "adrfam": "IPv4", 00:16:41.711 "traddr": "192.168.100.8", 00:16:41.711 "trsvcid": "43205" 00:16:41.711 }, 00:16:41.711 "auth": { 00:16:41.711 "state": "completed", 00:16:41.711 "digest": "sha256", 00:16:41.711 "dhgroup": "ffdhe6144" 00:16:41.711 } 00:16:41.711 } 00:16:41.711 ]' 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.711 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.971 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.971 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.971 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.971 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.971 14:58:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.971 14:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:16:42.908 14:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.908 14:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:42.908 14:58:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.908 14:58:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.908 14:58:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.908 14:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.908 14:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:42.908 14:58:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.167 14:58:59 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.167 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.425 00:16:43.425 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.425 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.425 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.684 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.684 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.684 14:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.684 14:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.684 14:58:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.684 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.684 { 00:16:43.684 "cntlid": 35, 00:16:43.684 "qid": 0, 00:16:43.684 "state": "enabled", 00:16:43.684 "thread": "nvmf_tgt_poll_group_000", 00:16:43.684 "listen_address": { 00:16:43.684 "trtype": "RDMA", 00:16:43.684 "adrfam": "IPv4", 00:16:43.684 "traddr": "192.168.100.8", 00:16:43.684 "trsvcid": "4420" 00:16:43.684 }, 00:16:43.684 "peer_address": { 00:16:43.684 "trtype": "RDMA", 00:16:43.684 "adrfam": "IPv4", 00:16:43.684 "traddr": "192.168.100.8", 00:16:43.684 "trsvcid": "59532" 00:16:43.684 }, 00:16:43.684 "auth": { 00:16:43.684 "state": "completed", 00:16:43.684 "digest": "sha256", 00:16:43.684 "dhgroup": "ffdhe6144" 00:16:43.684 } 00:16:43.684 } 00:16:43.684 ]' 00:16:43.684 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.684 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.684 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.944 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.944 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:16:43.944 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.944 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.944 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.944 14:58:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:16:44.896 14:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.896 14:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:44.896 14:59:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.896 14:59:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.896 14:59:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.896 14:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.896 14:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.896 14:59:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.154 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.154 14:59:01 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.412 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.670 { 00:16:45.670 "cntlid": 37, 00:16:45.670 "qid": 0, 00:16:45.670 "state": "enabled", 00:16:45.670 "thread": "nvmf_tgt_poll_group_000", 00:16:45.670 "listen_address": { 00:16:45.670 "trtype": "RDMA", 00:16:45.670 "adrfam": "IPv4", 00:16:45.670 "traddr": "192.168.100.8", 00:16:45.670 "trsvcid": "4420" 00:16:45.670 }, 00:16:45.670 "peer_address": { 00:16:45.670 "trtype": "RDMA", 00:16:45.670 "adrfam": "IPv4", 00:16:45.670 "traddr": "192.168.100.8", 00:16:45.670 "trsvcid": "32829" 00:16:45.670 }, 00:16:45.670 "auth": { 00:16:45.670 "state": "completed", 00:16:45.670 "digest": "sha256", 00:16:45.670 "dhgroup": "ffdhe6144" 00:16:45.670 } 00:16:45.670 } 00:16:45.670 ]' 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.670 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.930 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.930 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.930 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.930 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.930 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.930 14:59:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:16:46.866 14:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:46.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.866 14:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:46.866 14:59:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.866 14:59:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.866 14:59:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.866 14:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.866 14:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.866 14:59:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.126 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.695 00:16:47.695 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.695 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.695 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.695 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.695 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.695 14:59:03 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.695 14:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.696 14:59:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.696 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.696 { 00:16:47.696 "cntlid": 39, 00:16:47.696 "qid": 0, 00:16:47.696 "state": "enabled", 00:16:47.696 "thread": "nvmf_tgt_poll_group_000", 00:16:47.696 "listen_address": { 00:16:47.696 "trtype": "RDMA", 00:16:47.696 "adrfam": "IPv4", 00:16:47.696 "traddr": "192.168.100.8", 00:16:47.696 "trsvcid": "4420" 00:16:47.696 }, 00:16:47.696 "peer_address": { 00:16:47.696 "trtype": "RDMA", 00:16:47.696 "adrfam": "IPv4", 00:16:47.696 "traddr": "192.168.100.8", 00:16:47.696 "trsvcid": "34704" 00:16:47.696 }, 00:16:47.696 "auth": { 00:16:47.696 "state": "completed", 00:16:47.696 "digest": "sha256", 00:16:47.696 "dhgroup": "ffdhe6144" 00:16:47.696 } 00:16:47.696 } 00:16:47.696 ]' 00:16:47.696 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.696 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.696 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.696 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.696 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.955 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.955 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.955 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.956 14:59:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:16:48.894 14:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.894 14:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:48.894 14:59:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.894 14:59:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.894 14:59:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.894 14:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.894 14:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.894 14:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.894 14:59:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.155 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.725 00:16:49.725 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.725 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.725 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.985 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.985 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.985 14:59:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.985 14:59:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.985 14:59:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.985 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.985 { 00:16:49.985 "cntlid": 41, 00:16:49.985 "qid": 0, 00:16:49.985 "state": "enabled", 00:16:49.985 "thread": "nvmf_tgt_poll_group_000", 00:16:49.985 "listen_address": { 00:16:49.985 "trtype": "RDMA", 00:16:49.985 "adrfam": "IPv4", 00:16:49.985 "traddr": "192.168.100.8", 00:16:49.985 "trsvcid": "4420" 00:16:49.985 }, 00:16:49.985 "peer_address": { 00:16:49.985 "trtype": "RDMA", 00:16:49.985 "adrfam": "IPv4", 00:16:49.986 "traddr": "192.168.100.8", 00:16:49.986 "trsvcid": "33005" 00:16:49.986 }, 00:16:49.986 "auth": { 00:16:49.986 "state": "completed", 00:16:49.986 "digest": "sha256", 
00:16:49.986 "dhgroup": "ffdhe8192" 00:16:49.986 } 00:16:49.986 } 00:16:49.986 ]' 00:16:49.986 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.986 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.986 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.986 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.986 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.986 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.986 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.986 14:59:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.245 14:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:16:51.188 14:59:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.188 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.759 00:16:51.759 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.759 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.759 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.019 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.019 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.019 14:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.019 14:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.019 14:59:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.019 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.019 { 00:16:52.019 "cntlid": 43, 00:16:52.019 "qid": 0, 00:16:52.019 "state": "enabled", 00:16:52.019 "thread": "nvmf_tgt_poll_group_000", 00:16:52.019 "listen_address": { 00:16:52.019 "trtype": "RDMA", 00:16:52.019 "adrfam": "IPv4", 00:16:52.019 "traddr": "192.168.100.8", 00:16:52.019 "trsvcid": "4420" 00:16:52.019 }, 00:16:52.019 "peer_address": { 00:16:52.019 "trtype": "RDMA", 00:16:52.019 "adrfam": "IPv4", 00:16:52.019 "traddr": "192.168.100.8", 00:16:52.019 "trsvcid": "36798" 00:16:52.019 }, 00:16:52.019 "auth": { 00:16:52.019 "state": "completed", 00:16:52.019 "digest": "sha256", 00:16:52.019 "dhgroup": "ffdhe8192" 00:16:52.019 } 00:16:52.019 } 00:16:52.019 ]' 00:16:52.019 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.019 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.019 14:59:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.020 14:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.020 14:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.020 14:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.020 14:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.020 14:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.281 14:59:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:16:53.221 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.221 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:53.221 14:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.221 14:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.221 14:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.221 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.221 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.221 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.480 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.481 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.051 00:16:54.051 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.051 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.051 14:59:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.051 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.051 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.051 14:59:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.051 14:59:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.051 14:59:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.051 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.051 { 00:16:54.051 "cntlid": 45, 00:16:54.051 "qid": 0, 00:16:54.051 "state": "enabled", 00:16:54.051 "thread": "nvmf_tgt_poll_group_000", 00:16:54.051 "listen_address": { 00:16:54.051 "trtype": "RDMA", 00:16:54.051 "adrfam": "IPv4", 00:16:54.051 "traddr": "192.168.100.8", 00:16:54.051 "trsvcid": "4420" 00:16:54.051 }, 00:16:54.051 "peer_address": { 00:16:54.051 "trtype": "RDMA", 00:16:54.051 "adrfam": "IPv4", 00:16:54.051 "traddr": "192.168.100.8", 00:16:54.051 "trsvcid": "51476" 00:16:54.051 }, 00:16:54.051 "auth": { 00:16:54.051 "state": "completed", 00:16:54.051 "digest": "sha256", 00:16:54.051 "dhgroup": "ffdhe8192" 00:16:54.051 } 00:16:54.051 } 00:16:54.051 ]' 00:16:54.051 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.311 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.311 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.311 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.311 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.311 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.311 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.311 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.572 14:59:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:16:55.143 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.403 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:55.403 14:59:11 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.403 14:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.403 14:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.403 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.403 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.403 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.664 14:59:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.235 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:16:56.235 { 00:16:56.235 "cntlid": 47, 00:16:56.235 "qid": 0, 00:16:56.235 "state": "enabled", 00:16:56.235 "thread": "nvmf_tgt_poll_group_000", 00:16:56.235 "listen_address": { 00:16:56.235 "trtype": "RDMA", 00:16:56.235 "adrfam": "IPv4", 00:16:56.235 "traddr": "192.168.100.8", 00:16:56.235 "trsvcid": "4420" 00:16:56.235 }, 00:16:56.235 "peer_address": { 00:16:56.235 "trtype": "RDMA", 00:16:56.235 "adrfam": "IPv4", 00:16:56.235 "traddr": "192.168.100.8", 00:16:56.235 "trsvcid": "57683" 00:16:56.235 }, 00:16:56.235 "auth": { 00:16:56.235 "state": "completed", 00:16:56.235 "digest": "sha256", 00:16:56.235 "dhgroup": "ffdhe8192" 00:16:56.235 } 00:16:56.235 } 00:16:56.235 ]' 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.235 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.496 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.496 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.496 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.496 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.496 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.496 14:59:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.435 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:57.694 14:59:13 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.694 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.955 00:16:57.955 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.955 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.955 14:59:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.215 { 00:16:58.215 "cntlid": 49, 00:16:58.215 "qid": 0, 00:16:58.215 "state": "enabled", 00:16:58.215 "thread": "nvmf_tgt_poll_group_000", 00:16:58.215 "listen_address": { 00:16:58.215 "trtype": "RDMA", 00:16:58.215 "adrfam": "IPv4", 00:16:58.215 "traddr": "192.168.100.8", 00:16:58.215 "trsvcid": "4420" 00:16:58.215 }, 00:16:58.215 "peer_address": { 00:16:58.215 "trtype": "RDMA", 00:16:58.215 "adrfam": "IPv4", 00:16:58.215 "traddr": "192.168.100.8", 00:16:58.215 "trsvcid": "50886" 00:16:58.215 }, 00:16:58.215 "auth": { 00:16:58.215 "state": "completed", 00:16:58.215 "digest": "sha384", 00:16:58.215 "dhgroup": "null" 00:16:58.215 } 00:16:58.215 } 00:16:58.215 ]' 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.215 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.475 14:59:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:16:59.415 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.415 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.415 14:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.415 14:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.415 14:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.415 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.415 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.415 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.675 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.676 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.676 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.936 { 00:16:59.936 "cntlid": 51, 00:16:59.936 "qid": 0, 00:16:59.936 "state": "enabled", 00:16:59.936 "thread": "nvmf_tgt_poll_group_000", 00:16:59.936 "listen_address": { 00:16:59.936 "trtype": "RDMA", 00:16:59.936 "adrfam": "IPv4", 00:16:59.936 "traddr": "192.168.100.8", 00:16:59.936 "trsvcid": "4420" 00:16:59.936 }, 00:16:59.936 "peer_address": { 00:16:59.936 "trtype": "RDMA", 00:16:59.936 "adrfam": "IPv4", 00:16:59.936 "traddr": "192.168.100.8", 00:16:59.936 "trsvcid": "59577" 00:16:59.936 }, 00:16:59.936 "auth": { 00:16:59.936 "state": "completed", 00:16:59.936 "digest": "sha384", 00:16:59.936 "dhgroup": "null" 00:16:59.936 } 00:16:59.936 } 00:16:59.936 ]' 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:59.936 14:59:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.197 14:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.198 14:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.198 14:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.198 14:59:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:17:01.138 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.138 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.138 14:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.138 14:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.138 14:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.138 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.138 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.138 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.398 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.657 00:17:01.657 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.657 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.657 14:59:17 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.917 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.917 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.917 14:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.917 14:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.917 14:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.917 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.917 { 00:17:01.917 "cntlid": 53, 00:17:01.917 "qid": 0, 00:17:01.917 "state": "enabled", 00:17:01.917 "thread": "nvmf_tgt_poll_group_000", 00:17:01.917 "listen_address": { 00:17:01.917 "trtype": "RDMA", 00:17:01.917 "adrfam": "IPv4", 00:17:01.917 "traddr": "192.168.100.8", 00:17:01.917 "trsvcid": "4420" 00:17:01.917 }, 00:17:01.917 "peer_address": { 00:17:01.917 "trtype": "RDMA", 00:17:01.917 "adrfam": "IPv4", 00:17:01.917 "traddr": "192.168.100.8", 00:17:01.918 "trsvcid": "35421" 00:17:01.918 }, 00:17:01.918 "auth": { 00:17:01.918 "state": "completed", 00:17:01.918 "digest": "sha384", 00:17:01.918 "dhgroup": "null" 00:17:01.918 } 00:17:01.918 } 00:17:01.918 ]' 00:17:01.918 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.918 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.918 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.918 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:01.918 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.918 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.918 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.918 14:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.178 14:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:17:03.119 14:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.119 14:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:03.119 14:59:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.119 14:59:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.119 14:59:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.119 14:59:18 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.119 14:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.119 14:59:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.119 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.380 00:17:03.380 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.380 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.380 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.640 { 00:17:03.640 "cntlid": 55, 00:17:03.640 "qid": 0, 00:17:03.640 "state": "enabled", 00:17:03.640 "thread": "nvmf_tgt_poll_group_000", 00:17:03.640 "listen_address": { 00:17:03.640 "trtype": "RDMA", 00:17:03.640 "adrfam": "IPv4", 00:17:03.640 "traddr": "192.168.100.8", 00:17:03.640 "trsvcid": "4420" 
00:17:03.640 }, 00:17:03.640 "peer_address": { 00:17:03.640 "trtype": "RDMA", 00:17:03.640 "adrfam": "IPv4", 00:17:03.640 "traddr": "192.168.100.8", 00:17:03.640 "trsvcid": "44355" 00:17:03.640 }, 00:17:03.640 "auth": { 00:17:03.640 "state": "completed", 00:17:03.640 "digest": "sha384", 00:17:03.640 "dhgroup": "null" 00:17:03.640 } 00:17:03.640 } 00:17:03.640 ]' 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.640 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.900 14:59:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:17:04.861 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.861 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:04.861 14:59:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.861 14:59:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.861 14:59:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.861 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.861 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.861 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:04.861 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.121 14:59:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.121 00:17:05.121 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.121 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.121 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.381 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.381 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.381 14:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.381 14:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.381 14:59:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.381 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.381 { 00:17:05.381 "cntlid": 57, 00:17:05.381 "qid": 0, 00:17:05.381 "state": "enabled", 00:17:05.381 "thread": "nvmf_tgt_poll_group_000", 00:17:05.381 "listen_address": { 00:17:05.381 "trtype": "RDMA", 00:17:05.381 "adrfam": "IPv4", 00:17:05.381 "traddr": "192.168.100.8", 00:17:05.381 "trsvcid": "4420" 00:17:05.381 }, 00:17:05.381 "peer_address": { 00:17:05.381 "trtype": "RDMA", 00:17:05.381 "adrfam": "IPv4", 00:17:05.381 "traddr": "192.168.100.8", 00:17:05.381 "trsvcid": "34190" 00:17:05.381 }, 00:17:05.381 "auth": { 00:17:05.381 "state": "completed", 00:17:05.381 "digest": "sha384", 00:17:05.381 "dhgroup": "ffdhe2048" 00:17:05.381 } 00:17:05.381 } 00:17:05.381 ]' 00:17:05.381 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.381 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.381 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.641 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.641 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.641 14:59:21 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.641 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.641 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.641 14:59:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:17:06.579 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.579 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.579 14:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.579 14:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.579 14:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.579 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.579 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.579 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.838 14:59:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.838 14:59:22 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.097 00:17:07.097 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.097 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.097 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.356 { 00:17:07.356 "cntlid": 59, 00:17:07.356 "qid": 0, 00:17:07.356 "state": "enabled", 00:17:07.356 "thread": "nvmf_tgt_poll_group_000", 00:17:07.356 "listen_address": { 00:17:07.356 "trtype": "RDMA", 00:17:07.356 "adrfam": "IPv4", 00:17:07.356 "traddr": "192.168.100.8", 00:17:07.356 "trsvcid": "4420" 00:17:07.356 }, 00:17:07.356 "peer_address": { 00:17:07.356 "trtype": "RDMA", 00:17:07.356 "adrfam": "IPv4", 00:17:07.356 "traddr": "192.168.100.8", 00:17:07.356 "trsvcid": "50256" 00:17:07.356 }, 00:17:07.356 "auth": { 00:17:07.356 "state": "completed", 00:17:07.356 "digest": "sha384", 00:17:07.356 "dhgroup": "ffdhe2048" 00:17:07.356 } 00:17:07.356 } 00:17:07.356 ]' 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.356 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.615 14:59:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:17:08.558 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:08.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.558 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.558 14:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.558 14:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.558 14:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.558 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.558 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.558 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.818 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.819 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.819 00:17:08.819 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.819 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.819 14:59:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.078 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.078 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:09.078 14:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.078 14:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.078 14:59:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.078 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.078 { 00:17:09.078 "cntlid": 61, 00:17:09.078 "qid": 0, 00:17:09.078 "state": "enabled", 00:17:09.078 "thread": "nvmf_tgt_poll_group_000", 00:17:09.078 "listen_address": { 00:17:09.078 "trtype": "RDMA", 00:17:09.078 "adrfam": "IPv4", 00:17:09.078 "traddr": "192.168.100.8", 00:17:09.078 "trsvcid": "4420" 00:17:09.078 }, 00:17:09.078 "peer_address": { 00:17:09.078 "trtype": "RDMA", 00:17:09.078 "adrfam": "IPv4", 00:17:09.078 "traddr": "192.168.100.8", 00:17:09.078 "trsvcid": "52582" 00:17:09.078 }, 00:17:09.078 "auth": { 00:17:09.078 "state": "completed", 00:17:09.078 "digest": "sha384", 00:17:09.078 "dhgroup": "ffdhe2048" 00:17:09.078 } 00:17:09.078 } 00:17:09.078 ]' 00:17:09.078 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.078 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.078 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.079 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.079 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.338 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.338 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.338 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.338 14:59:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:17:10.281 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.281 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.281 14:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.281 14:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.281 14:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.281 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.281 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.281 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.607 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.884 00:17:10.884 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.884 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.884 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.884 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.884 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.884 14:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.884 14:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.884 14:59:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.884 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.884 { 00:17:10.884 "cntlid": 63, 00:17:10.884 "qid": 0, 00:17:10.884 "state": "enabled", 00:17:10.884 "thread": "nvmf_tgt_poll_group_000", 00:17:10.884 "listen_address": { 00:17:10.884 "trtype": "RDMA", 00:17:10.884 "adrfam": "IPv4", 00:17:10.884 "traddr": "192.168.100.8", 00:17:10.884 "trsvcid": "4420" 00:17:10.884 }, 00:17:10.884 "peer_address": { 00:17:10.884 "trtype": "RDMA", 00:17:10.884 "adrfam": "IPv4", 00:17:10.884 "traddr": "192.168.100.8", 00:17:10.884 "trsvcid": "59497" 00:17:10.884 }, 00:17:10.884 "auth": { 00:17:10.884 "state": "completed", 00:17:10.884 "digest": "sha384", 
00:17:10.885 "dhgroup": "ffdhe2048" 00:17:10.885 } 00:17:10.885 } 00:17:10.885 ]' 00:17:10.885 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.885 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.885 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.144 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.144 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.144 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.144 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.144 14:59:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.144 14:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:17:12.085 14:59:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.085 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.085 14:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.085 14:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.085 14:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.085 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.085 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.085 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.085 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.346 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.607 00:17:12.607 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.607 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.607 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.867 { 00:17:12.867 "cntlid": 65, 00:17:12.867 "qid": 0, 00:17:12.867 "state": "enabled", 00:17:12.867 "thread": "nvmf_tgt_poll_group_000", 00:17:12.867 "listen_address": { 00:17:12.867 "trtype": "RDMA", 00:17:12.867 "adrfam": "IPv4", 00:17:12.867 "traddr": "192.168.100.8", 00:17:12.867 "trsvcid": "4420" 00:17:12.867 }, 00:17:12.867 "peer_address": { 00:17:12.867 "trtype": "RDMA", 00:17:12.867 "adrfam": "IPv4", 00:17:12.867 "traddr": "192.168.100.8", 00:17:12.867 "trsvcid": "33323" 00:17:12.867 }, 00:17:12.867 "auth": { 00:17:12.867 "state": "completed", 00:17:12.867 "digest": "sha384", 00:17:12.867 "dhgroup": "ffdhe3072" 00:17:12.867 } 00:17:12.867 } 00:17:12.867 ]' 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.867 14:59:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.127 14:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:17:14.067 14:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.067 14:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:14.067 14:59:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.067 14:59:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.067 14:59:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.067 14:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.067 14:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.067 14:59:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.067 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.327 00:17:14.328 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.328 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.328 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.588 { 00:17:14.588 "cntlid": 67, 00:17:14.588 "qid": 0, 00:17:14.588 "state": "enabled", 00:17:14.588 "thread": "nvmf_tgt_poll_group_000", 00:17:14.588 "listen_address": { 00:17:14.588 "trtype": "RDMA", 00:17:14.588 "adrfam": "IPv4", 00:17:14.588 "traddr": "192.168.100.8", 00:17:14.588 "trsvcid": "4420" 00:17:14.588 }, 00:17:14.588 "peer_address": { 00:17:14.588 "trtype": "RDMA", 00:17:14.588 "adrfam": "IPv4", 00:17:14.588 "traddr": "192.168.100.8", 00:17:14.588 "trsvcid": "52073" 00:17:14.588 }, 00:17:14.588 "auth": { 00:17:14.588 "state": "completed", 00:17:14.588 "digest": "sha384", 00:17:14.588 "dhgroup": "ffdhe3072" 00:17:14.588 } 00:17:14.588 } 00:17:14.588 ]' 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.588 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.848 14:59:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:17:15.789 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.789 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:15.789 14:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.789 14:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.789 14:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.789 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.789 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.789 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.050 14:59:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.311 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.311 { 00:17:16.311 "cntlid": 69, 00:17:16.311 "qid": 0, 00:17:16.311 "state": "enabled", 00:17:16.311 "thread": "nvmf_tgt_poll_group_000", 00:17:16.311 "listen_address": { 00:17:16.311 "trtype": "RDMA", 00:17:16.311 "adrfam": "IPv4", 00:17:16.311 "traddr": "192.168.100.8", 00:17:16.311 "trsvcid": "4420" 00:17:16.311 }, 00:17:16.311 "peer_address": { 00:17:16.311 "trtype": "RDMA", 00:17:16.311 "adrfam": "IPv4", 00:17:16.311 "traddr": "192.168.100.8", 00:17:16.311 "trsvcid": "35766" 00:17:16.311 }, 00:17:16.311 "auth": { 00:17:16.311 "state": "completed", 00:17:16.311 "digest": "sha384", 00:17:16.311 "dhgroup": "ffdhe3072" 00:17:16.311 } 00:17:16.311 } 00:17:16.311 ]' 00:17:16.311 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.572 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.572 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.572 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.572 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.572 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.572 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.572 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.832 14:59:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 
ffdhe3072 3 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.771 14:59:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.032 00:17:18.032 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.032 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.032 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.292 { 00:17:18.292 "cntlid": 71, 00:17:18.292 "qid": 0, 00:17:18.292 "state": "enabled", 00:17:18.292 "thread": "nvmf_tgt_poll_group_000", 00:17:18.292 "listen_address": { 00:17:18.292 "trtype": "RDMA", 00:17:18.292 "adrfam": "IPv4", 00:17:18.292 "traddr": "192.168.100.8", 00:17:18.292 "trsvcid": "4420" 00:17:18.292 }, 00:17:18.292 "peer_address": { 00:17:18.292 "trtype": "RDMA", 00:17:18.292 "adrfam": "IPv4", 00:17:18.292 "traddr": "192.168.100.8", 00:17:18.292 "trsvcid": "45928" 00:17:18.292 }, 00:17:18.292 "auth": { 00:17:18.292 "state": "completed", 00:17:18.292 "digest": "sha384", 00:17:18.292 "dhgroup": "ffdhe3072" 00:17:18.292 } 00:17:18.292 } 00:17:18.292 ]' 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.292 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.553 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.553 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.553 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.553 14:59:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:17:19.493 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.493 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:19.493 14:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.493 14:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.493 14:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.493 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.493 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.493 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.493 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.754 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.014 00:17:20.014 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.014 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.014 14:59:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.014 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.274 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.275 { 00:17:20.275 "cntlid": 73, 00:17:20.275 "qid": 0, 00:17:20.275 "state": "enabled", 00:17:20.275 "thread": "nvmf_tgt_poll_group_000", 00:17:20.275 "listen_address": { 00:17:20.275 "trtype": "RDMA", 00:17:20.275 "adrfam": "IPv4", 00:17:20.275 "traddr": "192.168.100.8", 00:17:20.275 "trsvcid": "4420" 00:17:20.275 }, 00:17:20.275 "peer_address": { 00:17:20.275 "trtype": "RDMA", 00:17:20.275 "adrfam": "IPv4", 00:17:20.275 "traddr": "192.168.100.8", 00:17:20.275 "trsvcid": "45772" 00:17:20.275 }, 00:17:20.275 "auth": { 00:17:20.275 "state": "completed", 00:17:20.275 "digest": "sha384", 00:17:20.275 "dhgroup": "ffdhe4096" 00:17:20.275 } 00:17:20.275 } 00:17:20.275 ]' 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.275 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.535 14:59:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.479 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.740 00:17:21.740 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.740 14:59:37 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.740 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.001 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.001 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.001 14:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.001 14:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.001 14:59:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.001 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.001 { 00:17:22.001 "cntlid": 75, 00:17:22.001 "qid": 0, 00:17:22.001 "state": "enabled", 00:17:22.001 "thread": "nvmf_tgt_poll_group_000", 00:17:22.001 "listen_address": { 00:17:22.001 "trtype": "RDMA", 00:17:22.001 "adrfam": "IPv4", 00:17:22.001 "traddr": "192.168.100.8", 00:17:22.001 "trsvcid": "4420" 00:17:22.001 }, 00:17:22.001 "peer_address": { 00:17:22.001 "trtype": "RDMA", 00:17:22.001 "adrfam": "IPv4", 00:17:22.001 "traddr": "192.168.100.8", 00:17:22.001 "trsvcid": "56926" 00:17:22.001 }, 00:17:22.001 "auth": { 00:17:22.001 "state": "completed", 00:17:22.001 "digest": "sha384", 00:17:22.001 "dhgroup": "ffdhe4096" 00:17:22.001 } 00:17:22.001 } 00:17:22.001 ]' 00:17:22.001 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.001 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.001 14:59:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.001 14:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.001 14:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.262 14:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.262 14:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.262 14:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.262 14:59:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:17:23.206 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.206 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.206 14:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.206 14:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.206 14:59:39 nvmf_rdma.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.206 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.206 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.206 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.468 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.729 00:17:23.729 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.729 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.729 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.990 { 00:17:23.990 "cntlid": 77, 00:17:23.990 "qid": 0, 00:17:23.990 "state": "enabled", 00:17:23.990 "thread": "nvmf_tgt_poll_group_000", 
00:17:23.990 "listen_address": { 00:17:23.990 "trtype": "RDMA", 00:17:23.990 "adrfam": "IPv4", 00:17:23.990 "traddr": "192.168.100.8", 00:17:23.990 "trsvcid": "4420" 00:17:23.990 }, 00:17:23.990 "peer_address": { 00:17:23.990 "trtype": "RDMA", 00:17:23.990 "adrfam": "IPv4", 00:17:23.990 "traddr": "192.168.100.8", 00:17:23.990 "trsvcid": "33060" 00:17:23.990 }, 00:17:23.990 "auth": { 00:17:23.990 "state": "completed", 00:17:23.990 "digest": "sha384", 00:17:23.990 "dhgroup": "ffdhe4096" 00:17:23.990 } 00:17:23.990 } 00:17:23.990 ]' 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.990 14:59:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.249 14:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:17:25.189 14:59:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.189 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.189 14:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.189 14:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.189 14:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.189 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.189 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.189 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:25.449 14:59:41 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.449 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.709 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.709 { 00:17:25.709 "cntlid": 79, 00:17:25.709 "qid": 0, 00:17:25.709 "state": "enabled", 00:17:25.709 "thread": "nvmf_tgt_poll_group_000", 00:17:25.709 "listen_address": { 00:17:25.709 "trtype": "RDMA", 00:17:25.709 "adrfam": "IPv4", 00:17:25.709 "traddr": "192.168.100.8", 00:17:25.709 "trsvcid": "4420" 00:17:25.709 }, 00:17:25.709 "peer_address": { 00:17:25.709 "trtype": "RDMA", 00:17:25.709 "adrfam": "IPv4", 00:17:25.709 "traddr": "192.168.100.8", 00:17:25.709 "trsvcid": "57221" 00:17:25.709 }, 00:17:25.709 "auth": { 00:17:25.709 "state": "completed", 00:17:25.709 "digest": "sha384", 00:17:25.709 "dhgroup": "ffdhe4096" 00:17:25.709 } 00:17:25.709 } 00:17:25.709 ]' 00:17:25.709 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.970 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.970 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.970 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.970 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:17:25.970 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.970 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.970 14:59:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.230 14:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:17:26.801 14:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.061 14:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.061 14:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.061 14:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.061 14:59:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.061 14:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.061 14:59:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.061 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.061 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:27.321 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.581 00:17:27.581 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.581 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.581 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.840 { 00:17:27.840 "cntlid": 81, 00:17:27.840 "qid": 0, 00:17:27.840 "state": "enabled", 00:17:27.840 "thread": "nvmf_tgt_poll_group_000", 00:17:27.840 "listen_address": { 00:17:27.840 "trtype": "RDMA", 00:17:27.840 "adrfam": "IPv4", 00:17:27.840 "traddr": "192.168.100.8", 00:17:27.840 "trsvcid": "4420" 00:17:27.840 }, 00:17:27.840 "peer_address": { 00:17:27.840 "trtype": "RDMA", 00:17:27.840 "adrfam": "IPv4", 00:17:27.840 "traddr": "192.168.100.8", 00:17:27.840 "trsvcid": "58343" 00:17:27.840 }, 00:17:27.840 "auth": { 00:17:27.840 "state": "completed", 00:17:27.840 "digest": "sha384", 00:17:27.840 "dhgroup": "ffdhe6144" 00:17:27.840 } 00:17:27.840 } 00:17:27.840 ]' 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.840 14:59:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.100 14:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:17:29.037 
14:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.037 14:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.037 14:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.037 14:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.037 14:59:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.037 14:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.037 14:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.037 14:59:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.295 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.554 00:17:29.554 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.554 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.554 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.813 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.813 14:59:45 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.813 14:59:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.813 14:59:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.813 14:59:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.814 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.814 { 00:17:29.814 "cntlid": 83, 00:17:29.814 "qid": 0, 00:17:29.814 "state": "enabled", 00:17:29.814 "thread": "nvmf_tgt_poll_group_000", 00:17:29.814 "listen_address": { 00:17:29.814 "trtype": "RDMA", 00:17:29.814 "adrfam": "IPv4", 00:17:29.814 "traddr": "192.168.100.8", 00:17:29.814 "trsvcid": "4420" 00:17:29.814 }, 00:17:29.814 "peer_address": { 00:17:29.814 "trtype": "RDMA", 00:17:29.814 "adrfam": "IPv4", 00:17:29.814 "traddr": "192.168.100.8", 00:17:29.814 "trsvcid": "48998" 00:17:29.814 }, 00:17:29.814 "auth": { 00:17:29.814 "state": "completed", 00:17:29.814 "digest": "sha384", 00:17:29.814 "dhgroup": "ffdhe6144" 00:17:29.814 } 00:17:29.814 } 00:17:29.814 ]' 00:17:29.814 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.814 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.814 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.814 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.814 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.814 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.814 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.814 14:59:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.073 14:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:17:31.014 14:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.014 14:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.014 14:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.014 14:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.014 14:59:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.014 14:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.014 14:59:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.014 14:59:46 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.276 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.537 00:17:31.537 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.537 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.537 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.798 { 00:17:31.798 "cntlid": 85, 00:17:31.798 "qid": 0, 00:17:31.798 "state": "enabled", 00:17:31.798 "thread": "nvmf_tgt_poll_group_000", 00:17:31.798 "listen_address": { 00:17:31.798 "trtype": "RDMA", 00:17:31.798 "adrfam": "IPv4", 00:17:31.798 "traddr": "192.168.100.8", 00:17:31.798 "trsvcid": "4420" 00:17:31.798 }, 00:17:31.798 "peer_address": { 00:17:31.798 "trtype": "RDMA", 00:17:31.798 "adrfam": "IPv4", 00:17:31.798 "traddr": "192.168.100.8", 00:17:31.798 
"trsvcid": "60830" 00:17:31.798 }, 00:17:31.798 "auth": { 00:17:31.798 "state": "completed", 00:17:31.798 "digest": "sha384", 00:17:31.798 "dhgroup": "ffdhe6144" 00:17:31.798 } 00:17:31.798 } 00:17:31.798 ]' 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.798 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.060 14:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:17:33.002 14:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.002 14:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.002 14:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.002 14:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.002 14:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.002 14:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.002 14:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.002 14:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.002 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.573 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.573 { 00:17:33.573 "cntlid": 87, 00:17:33.573 "qid": 0, 00:17:33.573 "state": "enabled", 00:17:33.573 "thread": "nvmf_tgt_poll_group_000", 00:17:33.573 "listen_address": { 00:17:33.573 "trtype": "RDMA", 00:17:33.573 "adrfam": "IPv4", 00:17:33.573 "traddr": "192.168.100.8", 00:17:33.573 "trsvcid": "4420" 00:17:33.573 }, 00:17:33.573 "peer_address": { 00:17:33.573 "trtype": "RDMA", 00:17:33.573 "adrfam": "IPv4", 00:17:33.573 "traddr": "192.168.100.8", 00:17:33.573 "trsvcid": "55212" 00:17:33.573 }, 00:17:33.573 "auth": { 00:17:33.573 "state": "completed", 00:17:33.573 "digest": "sha384", 00:17:33.573 "dhgroup": "ffdhe6144" 00:17:33.573 } 00:17:33.573 } 00:17:33.573 ]' 00:17:33.573 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.834 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.834 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.834 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.834 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.834 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.834 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.834 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.095 14:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:17:34.665 14:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.926 14:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.926 14:59:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.926 14:59:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.926 14:59:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.926 14:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.926 14:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.926 14:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.926 14:59:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.187 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.759 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.759 { 00:17:35.759 "cntlid": 89, 00:17:35.759 "qid": 0, 00:17:35.759 "state": "enabled", 00:17:35.759 "thread": "nvmf_tgt_poll_group_000", 00:17:35.759 "listen_address": { 00:17:35.759 "trtype": "RDMA", 00:17:35.759 "adrfam": "IPv4", 00:17:35.759 "traddr": "192.168.100.8", 00:17:35.759 "trsvcid": "4420" 00:17:35.759 }, 00:17:35.759 "peer_address": { 00:17:35.759 "trtype": "RDMA", 00:17:35.759 "adrfam": "IPv4", 00:17:35.759 "traddr": "192.168.100.8", 00:17:35.759 "trsvcid": "38412" 00:17:35.759 }, 00:17:35.759 "auth": { 00:17:35.759 "state": "completed", 00:17:35.759 "digest": "sha384", 00:17:35.759 "dhgroup": "ffdhe8192" 00:17:35.759 } 00:17:35.759 } 00:17:35.759 ]' 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.759 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.020 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.020 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.020 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.020 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.020 14:59:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.020 14:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:17:36.962 14:59:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.962 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.962 14:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.962 14:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.222 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.791 00:17:37.791 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.791 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.791 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.050 { 00:17:38.050 "cntlid": 91, 00:17:38.050 "qid": 0, 00:17:38.050 "state": "enabled", 00:17:38.050 "thread": "nvmf_tgt_poll_group_000", 00:17:38.050 "listen_address": { 00:17:38.050 "trtype": "RDMA", 00:17:38.050 "adrfam": "IPv4", 00:17:38.050 "traddr": "192.168.100.8", 00:17:38.050 "trsvcid": "4420" 00:17:38.050 }, 00:17:38.050 "peer_address": { 00:17:38.050 "trtype": "RDMA", 00:17:38.050 "adrfam": "IPv4", 00:17:38.050 "traddr": "192.168.100.8", 00:17:38.050 "trsvcid": "55874" 00:17:38.050 }, 00:17:38.050 "auth": { 00:17:38.050 "state": "completed", 00:17:38.050 "digest": "sha384", 00:17:38.050 "dhgroup": "ffdhe8192" 00:17:38.050 } 00:17:38.050 } 00:17:38.050 ]' 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.050 14:59:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.050 14:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.050 14:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.050 14:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.311 14:59:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:17:39.251 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.251 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:39.251 14:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.251 14:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.251 14:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.251 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.251 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.251 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 2 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.512 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.772 00:17:40.032 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.032 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.032 14:59:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.032 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.032 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.032 14:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.032 14:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.032 14:59:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.032 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.032 { 00:17:40.032 "cntlid": 93, 00:17:40.032 "qid": 0, 00:17:40.032 "state": "enabled", 00:17:40.032 "thread": "nvmf_tgt_poll_group_000", 00:17:40.032 "listen_address": { 00:17:40.032 "trtype": "RDMA", 00:17:40.032 "adrfam": "IPv4", 00:17:40.032 "traddr": "192.168.100.8", 00:17:40.032 "trsvcid": "4420" 00:17:40.032 }, 00:17:40.032 "peer_address": { 00:17:40.032 "trtype": "RDMA", 00:17:40.032 "adrfam": "IPv4", 00:17:40.032 "traddr": "192.168.100.8", 00:17:40.032 "trsvcid": "36307" 00:17:40.032 }, 00:17:40.032 "auth": { 00:17:40.032 "state": "completed", 00:17:40.032 "digest": "sha384", 00:17:40.032 "dhgroup": "ffdhe8192" 00:17:40.032 } 00:17:40.032 } 00:17:40.032 ]' 00:17:40.032 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:40.032 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.032 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.293 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.293 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.293 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.293 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.293 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.293 14:59:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:17:41.314 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.314 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:41.314 14:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.314 14:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.314 14:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.315 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.315 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.315 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.575 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:41.575 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.575 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:41.575 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:41.575 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:41.575 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.575 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:41.575 14:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.575 14:59:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.575 14:59:57 
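One detail worth calling out in the records above: every iteration re-evaluates ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) (marker @37), and key index 3 has no controller key, which is why the nvmf_subsystem_add_host call just traced carries only --dhchap-key key3. The ${var:+word} expansion yields word only when var is set and non-empty, so keyid 3 is quietly downgraded to unidirectional authentication. A self-contained illustration of the idiom (array contents hypothetical):

    ckeys=([0]=ck0 [1]=ck1 [2]=ck2 [3]=)   # index 3 intentionally empty
    for keyid in "${!ckeys[@]}"; do
        # Expands to the option pair only when a controller key exists.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "keyid=$keyid extra: ${ckey[*]:-<none>}"
    done
    # keyid=0..2 print the --dhchap-ctrlr-key option; keyid=3 prints <none>.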
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.576 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.576 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.146 00:17:42.146 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.146 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.146 14:59:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.146 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.146 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.146 14:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.146 14:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.146 14:59:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.146 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.146 { 00:17:42.146 "cntlid": 95, 00:17:42.146 "qid": 0, 00:17:42.146 "state": "enabled", 00:17:42.146 "thread": "nvmf_tgt_poll_group_000", 00:17:42.146 "listen_address": { 00:17:42.146 "trtype": "RDMA", 00:17:42.146 "adrfam": "IPv4", 00:17:42.146 "traddr": "192.168.100.8", 00:17:42.146 "trsvcid": "4420" 00:17:42.146 }, 00:17:42.146 "peer_address": { 00:17:42.146 "trtype": "RDMA", 00:17:42.146 "adrfam": "IPv4", 00:17:42.146 "traddr": "192.168.100.8", 00:17:42.146 "trsvcid": "41283" 00:17:42.146 }, 00:17:42.146 "auth": { 00:17:42.146 "state": "completed", 00:17:42.146 "digest": "sha384", 00:17:42.146 "dhgroup": "ffdhe8192" 00:17:42.146 } 00:17:42.146 } 00:17:42.146 ]' 00:17:42.146 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.146 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.146 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.406 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.406 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.406 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.406 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.406 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.406 14:59:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:17:43.347 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.608 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.870 00:17:43.870 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:17:43.870 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.870 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.130 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.130 14:59:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.130 14:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.130 14:59:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.130 { 00:17:44.130 "cntlid": 97, 00:17:44.130 "qid": 0, 00:17:44.130 "state": "enabled", 00:17:44.130 "thread": "nvmf_tgt_poll_group_000", 00:17:44.130 "listen_address": { 00:17:44.130 "trtype": "RDMA", 00:17:44.130 "adrfam": "IPv4", 00:17:44.130 "traddr": "192.168.100.8", 00:17:44.130 "trsvcid": "4420" 00:17:44.130 }, 00:17:44.130 "peer_address": { 00:17:44.130 "trtype": "RDMA", 00:17:44.130 "adrfam": "IPv4", 00:17:44.130 "traddr": "192.168.100.8", 00:17:44.130 "trsvcid": "39926" 00:17:44.130 }, 00:17:44.130 "auth": { 00:17:44.130 "state": "completed", 00:17:44.130 "digest": "sha512", 00:17:44.130 "dhgroup": "null" 00:17:44.130 } 00:17:44.130 } 00:17:44.130 ]' 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.130 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.390 15:00:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:17:45.331 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.331 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:45.331 15:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.331 15:00:01 
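After the SPDK-initiator leg, each key is exercised again through the kernel host stack (markers @52 through @56): an nvme-cli connect with in-band DH-HMAC-CHAP secrets, a disconnect, and host deregistration. The secret strings follow the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64>: where, per the NVMe base specification (stated from memory, worth verifying against your copy), <t> names the transformation applied to the key material (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload ends in a CRC-32 of the key. A sketch of this leg, with the secrets left truncated exactly as the log truncates them:

    # Connect via nvme-cli with one I/O queue (-i 1); the fabric
    # handshake uses the supplied host and controller secrets.
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'

    # Drop the kernel connection and deregister the host on the target.
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"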
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.331 15:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.331 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.331 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.331 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.592 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.592 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.851 { 00:17:45.851 "cntlid": 99, 
00:17:45.851 "qid": 0, 00:17:45.851 "state": "enabled", 00:17:45.851 "thread": "nvmf_tgt_poll_group_000", 00:17:45.851 "listen_address": { 00:17:45.851 "trtype": "RDMA", 00:17:45.851 "adrfam": "IPv4", 00:17:45.851 "traddr": "192.168.100.8", 00:17:45.851 "trsvcid": "4420" 00:17:45.851 }, 00:17:45.851 "peer_address": { 00:17:45.851 "trtype": "RDMA", 00:17:45.851 "adrfam": "IPv4", 00:17:45.851 "traddr": "192.168.100.8", 00:17:45.851 "trsvcid": "50877" 00:17:45.851 }, 00:17:45.851 "auth": { 00:17:45.851 "state": "completed", 00:17:45.851 "digest": "sha512", 00:17:45.851 "dhgroup": "null" 00:17:45.851 } 00:17:45.851 } 00:17:45.851 ]' 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.851 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.109 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:46.109 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.109 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.109 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.109 15:00:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.109 15:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:17:47.045 15:00:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=null 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.306 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.565 00:17:47.565 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.565 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.565 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.827 { 00:17:47.827 "cntlid": 101, 00:17:47.827 "qid": 0, 00:17:47.827 "state": "enabled", 00:17:47.827 "thread": "nvmf_tgt_poll_group_000", 00:17:47.827 "listen_address": { 00:17:47.827 "trtype": "RDMA", 00:17:47.827 "adrfam": "IPv4", 00:17:47.827 "traddr": "192.168.100.8", 00:17:47.827 "trsvcid": "4420" 00:17:47.827 }, 00:17:47.827 "peer_address": { 00:17:47.827 "trtype": "RDMA", 00:17:47.827 "adrfam": "IPv4", 00:17:47.827 "traddr": "192.168.100.8", 00:17:47.827 "trsvcid": "49412" 00:17:47.827 }, 00:17:47.827 "auth": { 00:17:47.827 "state": "completed", 00:17:47.827 "digest": "sha512", 00:17:47.827 "dhgroup": "null" 00:17:47.827 } 00:17:47.827 } 00:17:47.827 ]' 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.827 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.086 15:00:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:17:49.033 15:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.033 15:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.033 15:00:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.033 15:00:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.033 15:00:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.033 15:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.033 15:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.033 15:00:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.033 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:49.033 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.033 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.033 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:49.033 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.033 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.033 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:49.033 15:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.033 15:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.293 15:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.293 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:17:49.293 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.293 00:17:49.293 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.293 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.293 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.554 { 00:17:49.554 "cntlid": 103, 00:17:49.554 "qid": 0, 00:17:49.554 "state": "enabled", 00:17:49.554 "thread": "nvmf_tgt_poll_group_000", 00:17:49.554 "listen_address": { 00:17:49.554 "trtype": "RDMA", 00:17:49.554 "adrfam": "IPv4", 00:17:49.554 "traddr": "192.168.100.8", 00:17:49.554 "trsvcid": "4420" 00:17:49.554 }, 00:17:49.554 "peer_address": { 00:17:49.554 "trtype": "RDMA", 00:17:49.554 "adrfam": "IPv4", 00:17:49.554 "traddr": "192.168.100.8", 00:17:49.554 "trsvcid": "45130" 00:17:49.554 }, 00:17:49.554 "auth": { 00:17:49.554 "state": "completed", 00:17:49.554 "digest": "sha512", 00:17:49.554 "dhgroup": "null" 00:17:49.554 } 00:17:49.554 } 00:17:49.554 ]' 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:49.554 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.820 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.820 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.820 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.820 15:00:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:17:50.766 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.766 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:17:50.766 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.766 15:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.766 15:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.766 15:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.766 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.766 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.766 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.766 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.025 15:00:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.285 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.285 { 00:17:51.285 "cntlid": 105, 00:17:51.285 "qid": 0, 00:17:51.285 "state": "enabled", 00:17:51.285 "thread": "nvmf_tgt_poll_group_000", 00:17:51.285 "listen_address": { 00:17:51.285 "trtype": "RDMA", 00:17:51.285 "adrfam": "IPv4", 00:17:51.285 "traddr": "192.168.100.8", 00:17:51.285 "trsvcid": "4420" 00:17:51.285 }, 00:17:51.285 "peer_address": { 00:17:51.285 "trtype": "RDMA", 00:17:51.285 "adrfam": "IPv4", 00:17:51.285 "traddr": "192.168.100.8", 00:17:51.285 "trsvcid": "40785" 00:17:51.285 }, 00:17:51.285 "auth": { 00:17:51.285 "state": "completed", 00:17:51.285 "digest": "sha512", 00:17:51.285 "dhgroup": "ffdhe2048" 00:17:51.285 } 00:17:51.285 } 00:17:51.285 ]' 00:17:51.285 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.545 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.545 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.545 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.545 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.545 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.545 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.545 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.805 15:00:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.744 15:00:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.003 00:17:53.003 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.003 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.003 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.262 { 00:17:53.262 "cntlid": 107, 00:17:53.262 "qid": 0, 00:17:53.262 "state": "enabled", 00:17:53.262 "thread": "nvmf_tgt_poll_group_000", 00:17:53.262 "listen_address": { 00:17:53.262 "trtype": "RDMA", 00:17:53.262 "adrfam": "IPv4", 00:17:53.262 "traddr": "192.168.100.8", 00:17:53.262 "trsvcid": "4420" 00:17:53.262 }, 00:17:53.262 "peer_address": { 00:17:53.262 "trtype": "RDMA", 00:17:53.262 "adrfam": "IPv4", 00:17:53.262 "traddr": "192.168.100.8", 00:17:53.262 "trsvcid": "53349" 00:17:53.262 }, 
00:17:53.262 "auth": { 00:17:53.262 "state": "completed", 00:17:53.262 "digest": "sha512", 00:17:53.262 "dhgroup": "ffdhe2048" 00:17:53.262 } 00:17:53.262 } 00:17:53.262 ]' 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.262 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.263 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.522 15:00:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:17:54.460 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.460 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:54.460 15:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.460 15:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.460 15:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.460 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.460 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.460 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.720 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.980 00:17:54.980 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.980 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.980 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.980 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.980 15:00:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.980 15:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.980 15:00:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.980 15:00:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.980 15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.980 { 00:17:54.980 "cntlid": 109, 00:17:54.980 "qid": 0, 00:17:54.980 "state": "enabled", 00:17:54.980 "thread": "nvmf_tgt_poll_group_000", 00:17:54.980 "listen_address": { 00:17:54.980 "trtype": "RDMA", 00:17:54.980 "adrfam": "IPv4", 00:17:54.980 "traddr": "192.168.100.8", 00:17:54.980 "trsvcid": "4420" 00:17:54.980 }, 00:17:54.980 "peer_address": { 00:17:54.980 "trtype": "RDMA", 00:17:54.980 "adrfam": "IPv4", 00:17:54.980 "traddr": "192.168.100.8", 00:17:54.980 "trsvcid": "46129" 00:17:54.980 }, 00:17:54.980 "auth": { 00:17:54.980 "state": "completed", 00:17:54.980 "digest": "sha512", 00:17:54.980 "dhgroup": "ffdhe2048" 00:17:54.980 } 00:17:54.980 } 00:17:54.980 ]' 00:17:54.980 15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.240 15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.240 15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.240 15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.240 15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.240 15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.240 15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.240 
15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.499 15:00:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:17:56.067 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.326 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:56.326 15:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.326 15:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.326 15:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.326 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.326 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.326 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.586 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.586 00:17:56.845 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.846 { 00:17:56.846 "cntlid": 111, 00:17:56.846 "qid": 0, 00:17:56.846 "state": "enabled", 00:17:56.846 "thread": "nvmf_tgt_poll_group_000", 00:17:56.846 "listen_address": { 00:17:56.846 "trtype": "RDMA", 00:17:56.846 "adrfam": "IPv4", 00:17:56.846 "traddr": "192.168.100.8", 00:17:56.846 "trsvcid": "4420" 00:17:56.846 }, 00:17:56.846 "peer_address": { 00:17:56.846 "trtype": "RDMA", 00:17:56.846 "adrfam": "IPv4", 00:17:56.846 "traddr": "192.168.100.8", 00:17:56.846 "trsvcid": "33034" 00:17:56.846 }, 00:17:56.846 "auth": { 00:17:56.846 "state": "completed", 00:17:56.846 "digest": "sha512", 00:17:56.846 "dhgroup": "ffdhe2048" 00:17:56.846 } 00:17:56.846 } 00:17:56.846 ]' 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.846 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.105 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.105 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.105 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.105 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.105 15:00:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.105 15:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:17:58.043 15:00:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.043 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.043 15:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:58.043 15:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.043 15:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.043 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.043 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.043 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.043 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.303 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.564 00:17:58.564 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.564 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.564 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.824 { 00:17:58.824 "cntlid": 113, 00:17:58.824 "qid": 0, 00:17:58.824 "state": "enabled", 00:17:58.824 "thread": "nvmf_tgt_poll_group_000", 00:17:58.824 "listen_address": { 00:17:58.824 "trtype": "RDMA", 00:17:58.824 "adrfam": "IPv4", 00:17:58.824 "traddr": "192.168.100.8", 00:17:58.824 "trsvcid": "4420" 00:17:58.824 }, 00:17:58.824 "peer_address": { 00:17:58.824 "trtype": "RDMA", 00:17:58.824 "adrfam": "IPv4", 00:17:58.824 "traddr": "192.168.100.8", 00:17:58.824 "trsvcid": "44712" 00:17:58.824 }, 00:17:58.824 "auth": { 00:17:58.824 "state": "completed", 00:17:58.824 "digest": "sha512", 00:17:58.824 "dhgroup": "ffdhe3072" 00:17:58.824 } 00:17:58.824 } 00:17:58.824 ]' 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.824 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.084 15:00:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:18:00.022 15:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.022 15:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.022 15:00:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.022 15:00:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.022 15:00:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.022 15:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.022 15:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.022 15:00:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 
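(Annotation, not part of the captured console output.) The trace above has just re-pointed the host's DH-HMAC-CHAP options at sha512/ffdhe3072 and is about to run connect_authenticate for key 1. A minimal sketch of the cycle each iteration in this trace performs, assuming the RPC sockets, addresses, and NQNs used in this run; <hostnqn>, <hostid>, and the DHHC-1 secrets stand in for the values printed in the log, and keyN/ckeyN is whichever key pair the inner loop is on:

  # host-side bdev_nvme calls go to the host RPC socket; subsystem calls go to the target's default socket
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
      --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  # attach through the SPDK host, then confirm the qpair finished DH-HMAC-CHAP
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect: completed
  # detach, repeat the connection with the kernel initiator using the raw DHHC-1 secrets, then clean up
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> \
      --hostid <hostid> --dhchap-secret <DHHC-1 key> --dhchap-ctrl-secret <DHHC-1 ctrl key>
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>

This is a condensed sketch of what target/auth.sh's connect_authenticate does per (digest, dhgroup, key) combination, not the verbatim script.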
00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.022 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.281 00:18:00.281 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.281 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.282 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.541 { 00:18:00.541 "cntlid": 115, 00:18:00.541 "qid": 0, 00:18:00.541 "state": "enabled", 00:18:00.541 "thread": "nvmf_tgt_poll_group_000", 00:18:00.541 "listen_address": { 00:18:00.541 "trtype": "RDMA", 00:18:00.541 "adrfam": "IPv4", 00:18:00.541 "traddr": "192.168.100.8", 00:18:00.541 "trsvcid": "4420" 00:18:00.541 }, 00:18:00.541 "peer_address": { 00:18:00.541 "trtype": "RDMA", 00:18:00.541 "adrfam": "IPv4", 00:18:00.541 "traddr": "192.168.100.8", 00:18:00.541 "trsvcid": "46546" 00:18:00.541 }, 00:18:00.541 "auth": { 00:18:00.541 "state": "completed", 00:18:00.541 "digest": "sha512", 00:18:00.541 "dhgroup": "ffdhe3072" 00:18:00.541 } 00:18:00.541 } 00:18:00.541 ]' 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.541 15:00:16 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.541 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.801 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.801 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.801 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.801 15:00:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:18:01.741 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.741 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.741 15:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.741 15:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.741 15:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.741 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.741 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.741 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.000 15:00:17 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.000 15:00:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.262 00:18:02.262 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.262 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.262 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.522 { 00:18:02.522 "cntlid": 117, 00:18:02.522 "qid": 0, 00:18:02.522 "state": "enabled", 00:18:02.522 "thread": "nvmf_tgt_poll_group_000", 00:18:02.522 "listen_address": { 00:18:02.522 "trtype": "RDMA", 00:18:02.522 "adrfam": "IPv4", 00:18:02.522 "traddr": "192.168.100.8", 00:18:02.522 "trsvcid": "4420" 00:18:02.522 }, 00:18:02.522 "peer_address": { 00:18:02.522 "trtype": "RDMA", 00:18:02.522 "adrfam": "IPv4", 00:18:02.522 "traddr": "192.168.100.8", 00:18:02.522 "trsvcid": "50135" 00:18:02.522 }, 00:18:02.522 "auth": { 00:18:02.522 "state": "completed", 00:18:02.522 "digest": "sha512", 00:18:02.522 "dhgroup": "ffdhe3072" 00:18:02.522 } 00:18:02.522 } 00:18:02.522 ]' 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.522 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.781 15:00:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.719 15:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.978 15:00:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.978 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.978 15:00:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.978 00:18:03.978 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.978 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:03.978 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.240 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.240 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.240 15:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.240 15:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.240 15:00:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.240 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.240 { 00:18:04.240 "cntlid": 119, 00:18:04.240 "qid": 0, 00:18:04.240 "state": "enabled", 00:18:04.240 "thread": "nvmf_tgt_poll_group_000", 00:18:04.240 "listen_address": { 00:18:04.240 "trtype": "RDMA", 00:18:04.240 "adrfam": "IPv4", 00:18:04.240 "traddr": "192.168.100.8", 00:18:04.240 "trsvcid": "4420" 00:18:04.240 }, 00:18:04.240 "peer_address": { 00:18:04.240 "trtype": "RDMA", 00:18:04.240 "adrfam": "IPv4", 00:18:04.240 "traddr": "192.168.100.8", 00:18:04.240 "trsvcid": "60260" 00:18:04.240 }, 00:18:04.240 "auth": { 00:18:04.240 "state": "completed", 00:18:04.240 "digest": "sha512", 00:18:04.240 "dhgroup": "ffdhe3072" 00:18:04.240 } 00:18:04.240 } 00:18:04.240 ]' 00:18:04.240 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.240 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.240 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.504 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.504 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.504 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.504 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.504 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.504 15:00:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:18:05.444 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.444 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.444 15:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.444 15:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.444 15:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.444 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 
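(Annotation, not part of the captured console output.) Here the outer dhgroup loop advances from ffdhe3072 to ffdhe4096 and the inner key loop restarts at key 0. Each iteration's pass/fail comes from the nvmf_subsystem_get_qpairs JSON shown in the trace; a sketch of the three checks it applies, assuming the same subsystem NQN and jq on the build node (the escaped `\s\h\a\5\1\2`-style patterns in the log are just bash `[[ == ]]` comparisons rendered by xtrace):

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # the qpair must have negotiated exactly the digest and dhgroup configured for this iteration ...
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  # ... and the DH-HMAC-CHAP exchange must have finished successfully
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]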
00:18:05.444 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.444 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.444 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.705 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.965 00:18:05.965 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.965 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.965 15:00:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.224 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.224 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.224 15:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.224 15:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.224 15:00:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.224 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.224 { 00:18:06.224 "cntlid": 121, 00:18:06.224 "qid": 0, 00:18:06.224 "state": "enabled", 00:18:06.225 "thread": "nvmf_tgt_poll_group_000", 00:18:06.225 "listen_address": { 00:18:06.225 
"trtype": "RDMA", 00:18:06.225 "adrfam": "IPv4", 00:18:06.225 "traddr": "192.168.100.8", 00:18:06.225 "trsvcid": "4420" 00:18:06.225 }, 00:18:06.225 "peer_address": { 00:18:06.225 "trtype": "RDMA", 00:18:06.225 "adrfam": "IPv4", 00:18:06.225 "traddr": "192.168.100.8", 00:18:06.225 "trsvcid": "33799" 00:18:06.225 }, 00:18:06.225 "auth": { 00:18:06.225 "state": "completed", 00:18:06.225 "digest": "sha512", 00:18:06.225 "dhgroup": "ffdhe4096" 00:18:06.225 } 00:18:06.225 } 00:18:06.225 ]' 00:18:06.225 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.225 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.225 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.225 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.225 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.225 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.225 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.225 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.484 15:00:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:07.428 15:00:23 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.428 15:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.688 15:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.688 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.688 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.688 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.948 { 00:18:07.948 "cntlid": 123, 00:18:07.948 "qid": 0, 00:18:07.948 "state": "enabled", 00:18:07.948 "thread": "nvmf_tgt_poll_group_000", 00:18:07.948 "listen_address": { 00:18:07.948 "trtype": "RDMA", 00:18:07.948 "adrfam": "IPv4", 00:18:07.948 "traddr": "192.168.100.8", 00:18:07.948 "trsvcid": "4420" 00:18:07.948 }, 00:18:07.948 "peer_address": { 00:18:07.948 "trtype": "RDMA", 00:18:07.948 "adrfam": "IPv4", 00:18:07.948 "traddr": "192.168.100.8", 00:18:07.948 "trsvcid": "48521" 00:18:07.948 }, 00:18:07.948 "auth": { 00:18:07.948 "state": "completed", 00:18:07.948 "digest": "sha512", 00:18:07.948 "dhgroup": "ffdhe4096" 00:18:07.948 } 00:18:07.948 } 00:18:07.948 ]' 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.948 15:00:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.948 15:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.209 15:00:24 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.209 15:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.209 15:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.209 15:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.209 15:00:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:18:09.147 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.147 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.147 15:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.147 15:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.147 15:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.147 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.147 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.147 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.407 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:09.407 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.407 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.407 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.407 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.407 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.407 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.407 15:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.407 15:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.408 15:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.408 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.408 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.668 00:18:09.668 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.668 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.668 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.928 { 00:18:09.928 "cntlid": 125, 00:18:09.928 "qid": 0, 00:18:09.928 "state": "enabled", 00:18:09.928 "thread": "nvmf_tgt_poll_group_000", 00:18:09.928 "listen_address": { 00:18:09.928 "trtype": "RDMA", 00:18:09.928 "adrfam": "IPv4", 00:18:09.928 "traddr": "192.168.100.8", 00:18:09.928 "trsvcid": "4420" 00:18:09.928 }, 00:18:09.928 "peer_address": { 00:18:09.928 "trtype": "RDMA", 00:18:09.928 "adrfam": "IPv4", 00:18:09.928 "traddr": "192.168.100.8", 00:18:09.928 "trsvcid": "59325" 00:18:09.928 }, 00:18:09.928 "auth": { 00:18:09.928 "state": "completed", 00:18:09.928 "digest": "sha512", 00:18:09.928 "dhgroup": "ffdhe4096" 00:18:09.928 } 00:18:09.928 } 00:18:09.928 ]' 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.928 15:00:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.188 15:00:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:18:11.135 15:00:26 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.135 15:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.470 15:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.470 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.470 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.470 00:18:11.470 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.470 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.470 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.763 { 00:18:11.763 "cntlid": 127, 00:18:11.763 "qid": 0, 00:18:11.763 "state": "enabled", 00:18:11.763 "thread": "nvmf_tgt_poll_group_000", 00:18:11.763 "listen_address": { 00:18:11.763 "trtype": "RDMA", 00:18:11.763 "adrfam": "IPv4", 00:18:11.763 "traddr": "192.168.100.8", 00:18:11.763 "trsvcid": "4420" 00:18:11.763 }, 00:18:11.763 "peer_address": { 00:18:11.763 "trtype": "RDMA", 00:18:11.763 "adrfam": "IPv4", 00:18:11.763 "traddr": "192.168.100.8", 00:18:11.763 "trsvcid": "40365" 00:18:11.763 }, 00:18:11.763 "auth": { 00:18:11.763 "state": "completed", 00:18:11.763 "digest": "sha512", 00:18:11.763 "dhgroup": "ffdhe4096" 00:18:11.763 } 00:18:11.763 } 00:18:11.763 ]' 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.763 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.024 15:00:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:18:12.964 15:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.964 15:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.964 15:00:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.964 15:00:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.964 15:00:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.964 15:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.964 15:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.964 15:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.964 15:00:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.225 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.486 00:18:13.486 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.486 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.486 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.745 { 00:18:13.745 "cntlid": 129, 00:18:13.745 "qid": 0, 00:18:13.745 "state": "enabled", 00:18:13.745 "thread": "nvmf_tgt_poll_group_000", 00:18:13.745 "listen_address": { 00:18:13.745 "trtype": "RDMA", 00:18:13.745 "adrfam": "IPv4", 00:18:13.745 "traddr": "192.168.100.8", 00:18:13.745 "trsvcid": "4420" 00:18:13.745 }, 00:18:13.745 "peer_address": { 00:18:13.745 "trtype": "RDMA", 00:18:13.745 "adrfam": "IPv4", 00:18:13.745 "traddr": "192.168.100.8", 00:18:13.745 "trsvcid": "53287" 00:18:13.745 }, 00:18:13.745 "auth": { 
00:18:13.745 "state": "completed", 00:18:13.745 "digest": "sha512", 00:18:13.745 "dhgroup": "ffdhe6144" 00:18:13.745 } 00:18:13.745 } 00:18:13.745 ]' 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.745 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.006 15:00:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:18:14.976 15:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.976 15:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:14.976 15:00:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.976 15:00:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.976 15:00:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.976 15:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.976 15:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.976 15:00:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.236 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.496 00:18:15.496 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.496 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.496 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.755 { 00:18:15.755 "cntlid": 131, 00:18:15.755 "qid": 0, 00:18:15.755 "state": "enabled", 00:18:15.755 "thread": "nvmf_tgt_poll_group_000", 00:18:15.755 "listen_address": { 00:18:15.755 "trtype": "RDMA", 00:18:15.755 "adrfam": "IPv4", 00:18:15.755 "traddr": "192.168.100.8", 00:18:15.755 "trsvcid": "4420" 00:18:15.755 }, 00:18:15.755 "peer_address": { 00:18:15.755 "trtype": "RDMA", 00:18:15.755 "adrfam": "IPv4", 00:18:15.755 "traddr": "192.168.100.8", 00:18:15.755 "trsvcid": "52624" 00:18:15.755 }, 00:18:15.755 "auth": { 00:18:15.755 "state": "completed", 00:18:15.755 "digest": "sha512", 00:18:15.755 "dhgroup": "ffdhe6144" 00:18:15.755 } 00:18:15.755 } 00:18:15.755 ]' 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.755 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.755 
15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.014 15:00:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:18:16.952 15:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.952 15:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.952 15:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.952 15:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.952 15:00:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.952 15:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.952 15:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.952 15:00:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.212 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.471 00:18:17.471 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.471 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.471 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.731 { 00:18:17.731 "cntlid": 133, 00:18:17.731 "qid": 0, 00:18:17.731 "state": "enabled", 00:18:17.731 "thread": "nvmf_tgt_poll_group_000", 00:18:17.731 "listen_address": { 00:18:17.731 "trtype": "RDMA", 00:18:17.731 "adrfam": "IPv4", 00:18:17.731 "traddr": "192.168.100.8", 00:18:17.731 "trsvcid": "4420" 00:18:17.731 }, 00:18:17.731 "peer_address": { 00:18:17.731 "trtype": "RDMA", 00:18:17.731 "adrfam": "IPv4", 00:18:17.731 "traddr": "192.168.100.8", 00:18:17.731 "trsvcid": "51740" 00:18:17.731 }, 00:18:17.731 "auth": { 00:18:17.731 "state": "completed", 00:18:17.731 "digest": "sha512", 00:18:17.731 "dhgroup": "ffdhe6144" 00:18:17.731 } 00:18:17.731 } 00:18:17.731 ]' 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.731 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.990 15:00:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:18:18.940 15:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.940 15:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.940 15:00:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.940 15:00:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.940 15:00:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.940 15:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.940 15:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.940 15:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.200 15:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:19.200 15:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.200 15:00:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.200 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.200 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.200 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.200 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:19.200 15:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.200 15:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.200 15:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.200 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.200 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.459 00:18:19.459 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.459 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.459 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.719 { 00:18:19.719 "cntlid": 135, 00:18:19.719 "qid": 0, 00:18:19.719 "state": "enabled", 00:18:19.719 "thread": "nvmf_tgt_poll_group_000", 00:18:19.719 "listen_address": { 00:18:19.719 "trtype": "RDMA", 00:18:19.719 "adrfam": "IPv4", 00:18:19.719 "traddr": "192.168.100.8", 00:18:19.719 "trsvcid": "4420" 00:18:19.719 }, 00:18:19.719 "peer_address": { 00:18:19.719 "trtype": "RDMA", 00:18:19.719 "adrfam": "IPv4", 00:18:19.719 "traddr": "192.168.100.8", 00:18:19.719 "trsvcid": "58691" 00:18:19.719 }, 00:18:19.719 "auth": { 00:18:19.719 "state": "completed", 00:18:19.719 "digest": "sha512", 00:18:19.719 "dhgroup": "ffdhe6144" 00:18:19.719 } 00:18:19.719 } 00:18:19.719 ]' 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.719 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.978 15:00:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 
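[Editor's note] The records above repeat a single connect_authenticate round for each key and DH group; a minimal sketch of one such round, built only from the RPCs that appear verbatim in this log, is added here for readability. The paths, NQNs, the 192.168.100.8:4420 listener and the key names (key0/ckey0) are the values used by this run, the DH-HMAC-CHAP keys are assumed to be already loaded under those names, and the target is assumed to answer on its default RPC socket while the host-side service answers on /var/tmp/host.sock.

  # One connect_authenticate iteration (editorial sketch, not captured output)
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

  # Limit the host-side initiator to one digest/dhgroup pair (here sha512 + ffdhe8192).
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Allow the host on the subsystem with a DH-HMAC-CHAP key; the controller
  # key makes the authentication bidirectional.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller from the host side; this is the step that performs
  # the DH-HMAC-CHAP exchange over the RDMA listener.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Confirm the negotiated digest/dhgroup on the target, then tear down.
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0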
00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:20.924 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.183 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.184 15:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.184 15:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.184 15:00:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.184 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.184 15:00:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.751 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.751 { 00:18:21.751 "cntlid": 137, 00:18:21.751 "qid": 0, 00:18:21.751 "state": "enabled", 00:18:21.751 "thread": "nvmf_tgt_poll_group_000", 00:18:21.751 "listen_address": { 00:18:21.751 "trtype": "RDMA", 00:18:21.751 "adrfam": "IPv4", 00:18:21.751 "traddr": "192.168.100.8", 00:18:21.751 "trsvcid": "4420" 00:18:21.751 }, 00:18:21.751 "peer_address": { 00:18:21.751 "trtype": "RDMA", 00:18:21.751 "adrfam": "IPv4", 00:18:21.751 "traddr": "192.168.100.8", 00:18:21.751 "trsvcid": "41042" 00:18:21.751 }, 00:18:21.751 "auth": { 00:18:21.751 "state": "completed", 00:18:21.751 "digest": "sha512", 00:18:21.751 "dhgroup": "ffdhe8192" 00:18:21.751 } 00:18:21.751 } 00:18:21.751 ]' 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.751 15:00:37 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.751 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.011 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.011 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.011 15:00:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.011 15:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:18:22.964 15:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.964 15:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.964 15:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.964 15:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.964 15:00:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.964 15:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.964 15:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.964 15:00:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
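[Editor's note] Each RPC round above is followed by the same kernel-initiator check: connect with nvme-cli, passing the host and controller secrets in DHHC-1 wire format, then disconnect and remove the host from the subsystem. A sketch with the secrets elided (the DHHC-1 strings shown in this log are throwaway test keys, not reusable credentials):

  # Kernel host check (editorial sketch, not captured output);
  # <host secret> and <ctrl secret> stand for the DHHC-1:xx:...: strings
  # that appear in the log records above.
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-secret '<host secret>' --dhchap-ctrl-secret '<ctrl secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0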
00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.224 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.225 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.794 00:18:23.794 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.794 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.794 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.054 { 00:18:24.054 "cntlid": 139, 00:18:24.054 "qid": 0, 00:18:24.054 "state": "enabled", 00:18:24.054 "thread": "nvmf_tgt_poll_group_000", 00:18:24.054 "listen_address": { 00:18:24.054 "trtype": "RDMA", 00:18:24.054 "adrfam": "IPv4", 00:18:24.054 "traddr": "192.168.100.8", 00:18:24.054 "trsvcid": "4420" 00:18:24.054 }, 00:18:24.054 "peer_address": { 00:18:24.054 "trtype": "RDMA", 00:18:24.054 "adrfam": "IPv4", 00:18:24.054 "traddr": "192.168.100.8", 00:18:24.054 "trsvcid": "53394" 00:18:24.054 }, 00:18:24.054 "auth": { 00:18:24.054 "state": "completed", 00:18:24.054 "digest": "sha512", 00:18:24.054 "dhgroup": "ffdhe8192" 00:18:24.054 } 00:18:24.054 } 00:18:24.054 ]' 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.054 15:00:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.313 15:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:OTIxZTliZDNjMjhkY2QzODNjM2MyOWY1NjBkZGJiZjMdwdXC: --dhchap-ctrl-secret DHHC-1:02:NDY4ZTM0NjAxYjMyMTdmYTkxN2ZhNGZhMjFiZjZkMDkyNjc0NzE5YjQwZmUwMWE03vo1EQ==: 00:18:25.276 15:00:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.276 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.845 00:18:25.845 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.845 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 
-- # jq -r '.[].name' 00:18:25.845 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.105 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.105 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.105 15:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.105 15:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.105 15:00:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.105 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.105 { 00:18:26.105 "cntlid": 141, 00:18:26.105 "qid": 0, 00:18:26.105 "state": "enabled", 00:18:26.105 "thread": "nvmf_tgt_poll_group_000", 00:18:26.105 "listen_address": { 00:18:26.105 "trtype": "RDMA", 00:18:26.105 "adrfam": "IPv4", 00:18:26.105 "traddr": "192.168.100.8", 00:18:26.105 "trsvcid": "4420" 00:18:26.105 }, 00:18:26.105 "peer_address": { 00:18:26.105 "trtype": "RDMA", 00:18:26.105 "adrfam": "IPv4", 00:18:26.105 "traddr": "192.168.100.8", 00:18:26.105 "trsvcid": "41553" 00:18:26.105 }, 00:18:26.105 "auth": { 00:18:26.105 "state": "completed", 00:18:26.105 "digest": "sha512", 00:18:26.105 "dhgroup": "ffdhe8192" 00:18:26.105 } 00:18:26.105 } 00:18:26.105 ]' 00:18:26.105 15:00:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.105 15:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.105 15:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.105 15:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.105 15:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.105 15:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.105 15:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.105 15:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.366 15:00:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Nzk0MzViODFlMDc4NThjODNkNmFjMGI4MWQ2YTY1MWYzODkyNGRkOThmODUwYmNmhIUkMg==: --dhchap-ctrl-secret DHHC-1:01:NjQ2MDIzYjU1YTdhZjkyZDJiZDg5NDMzMzMxYmM3MmPFbopf: 00:18:27.306 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.306 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.306 15:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.306 15:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.306 15:00:43 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.306 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.306 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.306 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.566 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.135 00:18:28.135 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.135 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.135 15:00:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.135 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.135 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.135 15:00:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.135 15:00:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.135 15:00:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.135 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.135 { 00:18:28.135 "cntlid": 143, 00:18:28.135 "qid": 0, 00:18:28.135 "state": "enabled", 00:18:28.135 "thread": "nvmf_tgt_poll_group_000", 00:18:28.135 "listen_address": { 00:18:28.135 "trtype": "RDMA", 00:18:28.135 
"adrfam": "IPv4", 00:18:28.135 "traddr": "192.168.100.8", 00:18:28.135 "trsvcid": "4420" 00:18:28.135 }, 00:18:28.135 "peer_address": { 00:18:28.135 "trtype": "RDMA", 00:18:28.135 "adrfam": "IPv4", 00:18:28.135 "traddr": "192.168.100.8", 00:18:28.135 "trsvcid": "42175" 00:18:28.135 }, 00:18:28.135 "auth": { 00:18:28.135 "state": "completed", 00:18:28.135 "digest": "sha512", 00:18:28.135 "dhgroup": "ffdhe8192" 00:18:28.135 } 00:18:28.135 } 00:18:28.135 ]' 00:18:28.135 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.395 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.395 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.395 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.395 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.395 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.395 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.395 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.656 15:00:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:18:29.226 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.485 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:29.745 15:00:45 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.745 15:00:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.315 00:18:30.315 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.315 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.315 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.315 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.315 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.315 15:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.315 15:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.315 15:00:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.315 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.315 { 00:18:30.315 "cntlid": 145, 00:18:30.315 "qid": 0, 00:18:30.315 "state": "enabled", 00:18:30.316 "thread": "nvmf_tgt_poll_group_000", 00:18:30.316 "listen_address": { 00:18:30.316 "trtype": "RDMA", 00:18:30.316 "adrfam": "IPv4", 00:18:30.316 "traddr": "192.168.100.8", 00:18:30.316 "trsvcid": "4420" 00:18:30.316 }, 00:18:30.316 "peer_address": { 00:18:30.316 "trtype": "RDMA", 00:18:30.316 "adrfam": "IPv4", 00:18:30.316 "traddr": "192.168.100.8", 00:18:30.316 "trsvcid": "58296" 00:18:30.316 }, 00:18:30.316 "auth": { 00:18:30.316 "state": "completed", 00:18:30.316 "digest": "sha512", 00:18:30.316 "dhgroup": "ffdhe8192" 00:18:30.316 } 00:18:30.316 } 00:18:30.316 ]' 00:18:30.316 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.316 15:00:46 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.316 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.576 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.576 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.576 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.576 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.576 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.576 15:00:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MGUxODFkZWM3NWViZWM4ZTM2YmM3Nzc1YjI4YjM4M2MwYTM5NjE1ODI5M2U4YjBjOahxBw==: --dhchap-ctrl-secret DHHC-1:03:ZmYzNjM1MzE3NWRlZjc5MjBmMmVhMmMxNjJlNzk2YjRkZjU1ZjUzMzUwNjQ3N2M1ZDc4MzMyOTQyNWQ5M2ZiYRl8K/s=: 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:31.519 
15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:31.519 15:00:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:03.626 request: 00:19:03.626 { 00:19:03.626 "name": "nvme0", 00:19:03.626 "trtype": "rdma", 00:19:03.626 "traddr": "192.168.100.8", 00:19:03.626 "adrfam": "ipv4", 00:19:03.626 "trsvcid": "4420", 00:19:03.626 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:03.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:03.626 "prchk_reftag": false, 00:19:03.626 "prchk_guard": false, 00:19:03.626 "hdgst": false, 00:19:03.626 "ddgst": false, 00:19:03.626 "dhchap_key": "key2", 00:19:03.626 "method": "bdev_nvme_attach_controller", 00:19:03.626 "req_id": 1 00:19:03.626 } 00:19:03.626 Got JSON-RPC error response 00:19:03.626 response: 00:19:03.626 { 00:19:03.626 "code": -5, 00:19:03.626 "message": "Input/output error" 00:19:03.626 } 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:03.626 15:01:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:03.626 request: 00:19:03.626 { 00:19:03.626 "name": "nvme0", 00:19:03.626 "trtype": "rdma", 00:19:03.626 "traddr": "192.168.100.8", 00:19:03.626 "adrfam": "ipv4", 00:19:03.626 "trsvcid": "4420", 00:19:03.626 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:03.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:03.627 "prchk_reftag": false, 00:19:03.627 "prchk_guard": false, 00:19:03.627 "hdgst": false, 00:19:03.627 "ddgst": false, 00:19:03.627 "dhchap_key": "key1", 00:19:03.627 "dhchap_ctrlr_key": "ckey2", 00:19:03.627 "method": "bdev_nvme_attach_controller", 00:19:03.627 "req_id": 1 00:19:03.627 } 00:19:03.627 Got JSON-RPC error response 00:19:03.627 response: 00:19:03.627 { 00:19:03.627 "code": -5, 00:19:03.627 "message": "Input/output error" 00:19:03.627 } 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.627 15:01:18 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.627 15:01:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.787 request: 00:19:35.787 { 00:19:35.787 "name": "nvme0", 00:19:35.787 "trtype": "rdma", 00:19:35.787 "traddr": "192.168.100.8", 00:19:35.787 "adrfam": "ipv4", 00:19:35.787 "trsvcid": "4420", 00:19:35.787 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:35.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:35.787 "prchk_reftag": false, 00:19:35.787 "prchk_guard": false, 00:19:35.787 "hdgst": false, 00:19:35.787 "ddgst": false, 00:19:35.787 "dhchap_key": "key1", 00:19:35.787 "dhchap_ctrlr_key": "ckey1", 00:19:35.787 "method": "bdev_nvme_attach_controller", 00:19:35.787 "req_id": 1 00:19:35.787 } 00:19:35.787 Got JSON-RPC error response 00:19:35.787 response: 00:19:35.787 { 00:19:35.787 "code": -5, 00:19:35.787 "message": "Input/output error" 00:19:35.787 } 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1816577 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1816577 ']' 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1816577 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1816577 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1816577' 00:19:35.787 killing process with pid 1816577 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1816577 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1816577 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1857669 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1857669 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1857669 ']' 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.787 15:01:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1857669 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1857669 ']' 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.787 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.788 15:01:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.788 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.788 { 00:19:35.788 "cntlid": 1, 00:19:35.788 "qid": 0, 00:19:35.788 "state": "enabled", 00:19:35.788 "thread": "nvmf_tgt_poll_group_000", 00:19:35.788 "listen_address": { 00:19:35.788 "trtype": "RDMA", 00:19:35.788 "adrfam": "IPv4", 00:19:35.788 "traddr": "192.168.100.8", 00:19:35.788 "trsvcid": "4420" 00:19:35.788 }, 00:19:35.788 "peer_address": { 00:19:35.788 "trtype": "RDMA", 00:19:35.788 "adrfam": "IPv4", 00:19:35.788 "traddr": "192.168.100.8", 00:19:35.788 "trsvcid": "47360" 00:19:35.788 }, 00:19:35.788 "auth": { 00:19:35.788 "state": "completed", 00:19:35.788 "digest": "sha512", 00:19:35.788 "dhgroup": "ffdhe8192" 00:19:35.788 } 00:19:35.788 } 00:19:35.788 ]' 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.788 15:01:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MDRkMGM2NTIyODQ1ZjgyZDRmZDQ5NmJmZTg1NTgwZGRhYzFlNjlhZmY2NjRhMDYwMTg2ZDhlYTQ1MWE5ZTQ2OYII9NQ=: 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:36.778 15:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:37.039 15:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.039 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:37.039 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.039 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:37.039 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.039 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:37.039 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.039 15:01:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.039 15:01:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.152 request: 00:20:09.152 { 00:20:09.152 "name": "nvme0", 
00:20:09.152 "trtype": "rdma", 00:20:09.152 "traddr": "192.168.100.8", 00:20:09.152 "adrfam": "ipv4", 00:20:09.152 "trsvcid": "4420", 00:20:09.152 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:09.152 "prchk_reftag": false, 00:20:09.152 "prchk_guard": false, 00:20:09.152 "hdgst": false, 00:20:09.152 "ddgst": false, 00:20:09.152 "dhchap_key": "key3", 00:20:09.152 "method": "bdev_nvme_attach_controller", 00:20:09.152 "req_id": 1 00:20:09.152 } 00:20:09.152 Got JSON-RPC error response 00:20:09.152 response: 00:20:09.152 { 00:20:09.152 "code": -5, 00:20:09.152 "message": "Input/output error" 00:20:09.152 } 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.152 15:02:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.280 request: 00:20:41.280 { 00:20:41.280 "name": "nvme0", 
00:20:41.280 "trtype": "rdma", 00:20:41.280 "traddr": "192.168.100.8", 00:20:41.280 "adrfam": "ipv4", 00:20:41.280 "trsvcid": "4420", 00:20:41.280 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:41.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:41.280 "prchk_reftag": false, 00:20:41.280 "prchk_guard": false, 00:20:41.280 "hdgst": false, 00:20:41.280 "ddgst": false, 00:20:41.280 "dhchap_key": "key3", 00:20:41.280 "method": "bdev_nvme_attach_controller", 00:20:41.280 "req_id": 1 00:20:41.280 } 00:20:41.280 Got JSON-RPC error response 00:20:41.280 response: 00:20:41.280 { 00:20:41.280 "code": -5, 00:20:41.280 "message": "Input/output error" 00:20:41.280 } 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:41.280 15:02:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:41.280 request: 00:20:41.280 { 00:20:41.280 "name": "nvme0", 00:20:41.280 "trtype": "rdma", 00:20:41.280 "traddr": "192.168.100.8", 00:20:41.280 "adrfam": "ipv4", 00:20:41.280 "trsvcid": "4420", 00:20:41.280 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:41.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:41.280 "prchk_reftag": false, 00:20:41.280 "prchk_guard": false, 00:20:41.280 "hdgst": false, 00:20:41.280 "ddgst": false, 00:20:41.280 "dhchap_key": "key0", 00:20:41.280 "dhchap_ctrlr_key": "key1", 00:20:41.280 "method": "bdev_nvme_attach_controller", 00:20:41.280 "req_id": 1 00:20:41.280 } 00:20:41.280 Got JSON-RPC error response 00:20:41.280 response: 00:20:41.280 { 00:20:41.280 "code": -5, 00:20:41.280 "message": "Input/output error" 00:20:41.280 } 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:41.280 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1816921 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1816921 ']' 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1816921 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1816921 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1816921' 00:20:41.280 killing process with pid 1816921 00:20:41.280 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1816921 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1816921 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:41.281 rmmod nvme_rdma 00:20:41.281 rmmod nvme_fabrics 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1857669 ']' 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1857669 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1857669 ']' 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1857669 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.281 15:02:54 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1857669 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1857669' 00:20:41.281 killing process with pid 1857669 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1857669 00:20:41.281 15:02:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1857669 00:20:41.281 15:02:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:41.281 15:02:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:41.281 15:02:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.v9i /tmp/spdk.key-sha256.3bt /tmp/spdk.key-sha384.kdW /tmp/spdk.key-sha512.AmL /tmp/spdk.key-sha512.47n /tmp/spdk.key-sha384.anE /tmp/spdk.key-sha256.1hL '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:20:41.281 00:20:41.281 real 4m38.732s 00:20:41.281 user 9m51.837s 00:20:41.281 sys 0m18.204s 00:20:41.281 15:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:41.281 15:02:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.281 ************************************ 00:20:41.281 END TEST nvmf_auth_target 00:20:41.281 ************************************ 00:20:41.281 15:02:55 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:20:41.281 15:02:55 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:20:41.281 15:02:55 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:41.281 15:02:55 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:41.281 15:02:55 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:20:41.281 15:02:55 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:20:41.281 15:02:55 nvmf_rdma -- nvmf/nvmf.sh@81 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:20:41.281 15:02:55 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:41.281 15:02:55 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.281 15:02:55 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:41.281 ************************************ 00:20:41.281 START TEST nvmf_srq_overwhelm 00:20:41.281 ************************************ 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:20:41.281 * Looking for test storage... 
00:20:41.281 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.281 15:02:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.861 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:20:47.862 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:20:47.862 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:20:47.862 Found net devices under 0000:98:00.0: mlx_0_0 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:20:47.862 Found net devices under 0000:98:00.1: mlx_0_1 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:47.862 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:47.862 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:20:47.862 altname enp152s0f0np0 00:20:47.862 altname ens817f0np0 00:20:47.862 inet 192.168.100.8/24 scope global mlx_0_0 00:20:47.862 valid_lft forever preferred_lft forever 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:47.862 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:47.862 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:20:47.862 altname enp152s0f1np1 00:20:47.862 altname ens817f1np1 00:20:47.862 inet 192.168.100.9/24 scope global mlx_0_1 00:20:47.862 valid_lft forever preferred_lft forever 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
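[Editor's note] The allocate_nic_ips pass above resolves the two mlx5 ports (0x15b3:0x1015) to mlx_0_0 = 192.168.100.8 and mlx_0_1 = 192.168.100.9. As a minimal manual check mirroring the get_ip_address() helper traced here (interface names are specific to this host and will differ elsewhere):

# Print the first IPv4 address on each RDMA-backed netdev used by this run.
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done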
00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:47.862 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:47.863 
192.168.100.9' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:47.863 192.168.100.9' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:47.863 192.168.100.9' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=1874631 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 1874631 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@829 -- # '[' -z 1874631 ']' 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.863 15:03:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:47.863 [2024-07-15 15:03:03.301088] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:47.863 [2024-07-15 15:03:03.301140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.863 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.863 [2024-07-15 15:03:03.366326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.863 [2024-07-15 15:03:03.432769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.863 [2024-07-15 15:03:03.432804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.863 [2024-07-15 15:03:03.432812] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.863 [2024-07-15 15:03:03.432818] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.863 [2024-07-15 15:03:03.432824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.863 [2024-07-15 15:03:03.432961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.863 [2024-07-15 15:03:03.432972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.863 [2024-07-15 15:03:03.433110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.863 [2024-07-15 15:03:03.433112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.122 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.122 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # return 0 00:20:48.122 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:48.122 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:48.122 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.122 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.122 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:20:48.122 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.122 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.122 [2024-07-15 15:03:04.156742] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15ad200/0x15b16f0) succeed. 00:20:48.122 [2024-07-15 15:03:04.169988] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15ae840/0x15f2d80) succeed. 
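[Editor's note] At this point the target (pid 1874631) is running with reactors on cores 0-3 and both IB devices (mlx5_0, mlx5_1) registered; the script now creates the RDMA transport and loops i=0..5, building one subsystem per future /dev/nvme<i>n1. Below is a condensed, hand-written sketch of that sequence using only commands visible in the trace. The rpc.py path is assumed from this workspace layout and rpc_cmd in the harness adds its own socket handling, so treat this as illustrative rather than the harness code itself.

# Sketch of the per-subsystem setup the trace performs for cnode0..cnode5 (values from this run).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path; the harness calls rpc_cmd

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024

for i in $(seq 0 5); do
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
done

In the trace, each connect is followed by a waitforblk poll (lsblk -l -o NAME | grep -q -w nvme<i>n1) before the next iteration begins.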
00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.382 Malloc0 00:20:48.382 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.383 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:20:48.383 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.383 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.383 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.383 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:48.383 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.383 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.383 [2024-07-15 15:03:04.270723] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:48.383 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.383 15:03:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:49.766 Malloc1 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.766 15:03:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:51.677 Malloc2 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.677 15:03:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:53.062 Malloc3 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.062 15:03:08 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.062 15:03:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:54.446 Malloc4 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.446 15:03:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:55.828 Malloc5 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.828 15:03:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:20:57.212 15:03:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:20:57.212 15:03:13 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1235 -- # local i=0 00:20:57.212 15:03:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:57.212 15:03:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:20:57.212 15:03:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:57.212 15:03:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:20:57.212 15:03:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:57.212 15:03:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:20:57.212 [global] 00:20:57.212 thread=1 00:20:57.212 invalidate=1 00:20:57.212 rw=read 00:20:57.212 time_based=1 00:20:57.212 runtime=10 00:20:57.212 ioengine=libaio 00:20:57.212 direct=1 00:20:57.212 bs=1048576 00:20:57.212 iodepth=128 00:20:57.212 norandommap=1 00:20:57.212 numjobs=13 00:20:57.212 00:20:57.212 [job0] 00:20:57.212 filename=/dev/nvme0n1 00:20:57.212 [job1] 00:20:57.212 filename=/dev/nvme1n1 00:20:57.212 [job2] 00:20:57.212 filename=/dev/nvme2n1 00:20:57.212 [job3] 00:20:57.212 filename=/dev/nvme3n1 00:20:57.212 [job4] 00:20:57.212 filename=/dev/nvme4n1 00:20:57.212 [job5] 00:20:57.212 filename=/dev/nvme5n1 00:20:57.495 Could not set queue depth (nvme0n1) 00:20:57.495 Could not set queue depth (nvme1n1) 00:20:57.495 Could not set queue depth (nvme2n1) 00:20:57.495 Could not set queue depth (nvme3n1) 00:20:57.495 Could not set queue depth (nvme4n1) 00:20:57.495 Could not set queue depth (nvme5n1) 00:20:57.756 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:57.756 ... 00:20:57.756 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:57.756 ... 00:20:57.756 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:57.756 ... 00:20:57.756 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:57.756 ... 00:20:57.756 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:57.756 ... 00:20:57.756 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:57.756 ... 
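[Editor's note] The fio-wrapper invocation above (-p nvmf -i 1048576 -d 128 -t read -r 10 -n 13) expands into the job options fio echoes before "Starting 78 threads" (6 namespaces x 13 jobs). A standalone rerun of the same workload could be sketched as below; the /dev/nvme*n1 names assume the connect order from this run, and the job file simply restates the options already printed in the log.

# Recreate the echoed read workload as a standalone fio run: 1 MiB reads, QD 128,
# 13 jobs per namespace, 10 s time-based, libaio with O_DIRECT.
cat > /tmp/srq_overwhelm_read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme1n1
[job2]
filename=/dev/nvme2n1
[job3]
filename=/dev/nvme3n1
[job4]
filename=/dev/nvme4n1
[job5]
filename=/dev/nvme5n1
EOF
fio /tmp/srq_overwhelm_read.fio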
00:20:57.756 fio-3.35 00:20:57.756 Starting 78 threads 00:21:09.983 00:21:09.983 job0: (groupid=0, jobs=1): err= 0: pid=1877299: Mon Jul 15 15:03:24 2024 00:21:09.983 read: IOPS=70, BW=70.3MiB/s (73.7MB/s)(705MiB/10028msec) 00:21:09.983 slat (usec): min=44, max=2121.1k, avg=14177.90, stdev=110984.61 00:21:09.983 clat (msec): min=24, max=5539, avg=1687.63, stdev=1723.34 00:21:09.983 lat (msec): min=30, max=5551, avg=1701.81, stdev=1729.78 00:21:09.983 clat percentiles (msec): 00:21:09.983 | 1.00th=[ 50], 5.00th=[ 213], 10.00th=[ 439], 20.00th=[ 810], 00:21:09.983 | 30.00th=[ 827], 40.00th=[ 894], 50.00th=[ 1003], 60.00th=[ 1083], 00:21:09.983 | 70.00th=[ 1099], 80.00th=[ 1385], 90.00th=[ 5336], 95.00th=[ 5470], 00:21:09.983 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537], 00:21:09.983 | 99.99th=[ 5537] 00:21:09.983 bw ( KiB/s): min=22528, max=159744, per=2.27%, avg=101972.80, stdev=49240.93, samples=10 00:21:09.983 iops : min= 22, max= 156, avg=99.50, stdev=48.11, samples=10 00:21:09.983 lat (msec) : 50=1.13%, 100=1.13%, 250=3.55%, 500=5.11%, 750=5.96% 00:21:09.983 lat (msec) : 1000=32.91%, 2000=30.64%, >=2000=19.57% 00:21:09.983 cpu : usr=0.12%, sys=1.53%, ctx=850, majf=0, minf=32769 00:21:09.983 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:21:09.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.983 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.983 issued rwts: total=705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.983 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.983 job0: (groupid=0, jobs=1): err= 0: pid=1877300: Mon Jul 15 15:03:24 2024 00:21:09.983 read: IOPS=4, BW=4263KiB/s (4366kB/s)(44.0MiB/10568msec) 00:21:09.983 slat (usec): min=673, max=2122.7k, avg=239493.16, stdev=655077.52 00:21:09.983 clat (msec): min=29, max=10565, avg=7223.20, stdev=3654.51 00:21:09.983 lat (msec): min=2077, max=10567, avg=7462.70, stdev=3514.76 00:21:09.983 clat percentiles (msec): 00:21:09.983 | 1.00th=[ 30], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2165], 00:21:09.983 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10402], 00:21:09.983 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:21:09.983 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:09.983 | 99.99th=[10537] 00:21:09.983 lat (msec) : 50=2.27%, >=2000=97.73% 00:21:09.983 cpu : usr=0.01%, sys=0.36%, ctx=70, majf=0, minf=11265 00:21:09.983 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:21:09.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.983 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:09.983 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.983 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.983 job0: (groupid=0, jobs=1): err= 0: pid=1877301: Mon Jul 15 15:03:24 2024 00:21:09.983 read: IOPS=19, BW=19.4MiB/s (20.3MB/s)(204MiB/10527msec) 00:21:09.983 slat (usec): min=712, max=2273.3k, avg=51589.51, stdev=266028.59 00:21:09.983 clat (usec): min=909, max=5884.8k, avg=3597852.07, stdev=1501032.77 00:21:09.983 lat (msec): min=1289, max=5900, avg=3649.44, stdev=1482.40 00:21:09.983 clat percentiles (msec): 00:21:09.983 | 1.00th=[ 1284], 5.00th=[ 1385], 10.00th=[ 1502], 20.00th=[ 1670], 00:21:09.983 | 30.00th=[ 1854], 40.00th=[ 3876], 50.00th=[ 4329], 60.00th=[ 4463], 00:21:09.983 | 70.00th=[ 4597], 
80.00th=[ 4933], 90.00th=[ 5134], 95.00th=[ 5269], 00:21:09.983 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873], 00:21:09.983 | 99.99th=[ 5873] 00:21:09.983 bw ( KiB/s): min=10240, max=92160, per=0.86%, avg=38912.00, stdev=36482.77, samples=4 00:21:09.983 iops : min= 10, max= 90, avg=38.00, stdev=35.63, samples=4 00:21:09.983 lat (usec) : 1000=0.49% 00:21:09.983 lat (msec) : 2000=32.84%, >=2000=66.67% 00:21:09.983 cpu : usr=0.00%, sys=1.00%, ctx=673, majf=0, minf=32769 00:21:09.983 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.7%, >=64=69.1% 00:21:09.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.983 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:21:09.984 issued rwts: total=204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.984 job0: (groupid=0, jobs=1): err= 0: pid=1877302: Mon Jul 15 15:03:24 2024 00:21:09.984 read: IOPS=18, BW=18.7MiB/s (19.6MB/s)(197MiB/10540msec) 00:21:09.984 slat (usec): min=734, max=2198.2k, avg=53493.33, stdev=289343.58 00:21:09.984 clat (usec): min=367, max=5558.4k, avg=3743325.79, stdev=1807050.85 00:21:09.984 lat (msec): min=1111, max=5580, avg=3796.82, stdev=1782.21 00:21:09.984 clat percentiles (msec): 00:21:09.984 | 1.00th=[ 1099], 5.00th=[ 1133], 10.00th=[ 1133], 20.00th=[ 1200], 00:21:09.984 | 30.00th=[ 1401], 40.00th=[ 4597], 50.00th=[ 4799], 60.00th=[ 4933], 00:21:09.984 | 70.00th=[ 5067], 80.00th=[ 5201], 90.00th=[ 5403], 95.00th=[ 5470], 00:21:09.984 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537], 00:21:09.984 | 99.99th=[ 5537] 00:21:09.984 bw ( KiB/s): min= 2048, max=122880, per=0.63%, avg=28265.60, stdev=52938.39, samples=5 00:21:09.984 iops : min= 2, max= 120, avg=27.60, stdev=51.70, samples=5 00:21:09.984 lat (usec) : 500=0.51% 00:21:09.984 lat (msec) : 2000=30.96%, >=2000=68.53% 00:21:09.984 cpu : usr=0.00%, sys=0.62%, ctx=497, majf=0, minf=32769 00:21:09.984 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.1%, 16=8.1%, 32=16.2%, >=64=68.0% 00:21:09.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.984 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:21:09.984 issued rwts: total=197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.984 job0: (groupid=0, jobs=1): err= 0: pid=1877303: Mon Jul 15 15:03:24 2024 00:21:09.984 read: IOPS=14, BW=14.3MiB/s (15.0MB/s)(151MiB/10570msec) 00:21:09.984 slat (usec): min=38, max=2160.7k, avg=69797.77, stdev=324309.91 00:21:09.984 clat (msec): min=29, max=10116, avg=7918.95, stdev=2179.93 00:21:09.984 lat (msec): min=2145, max=10122, avg=7988.75, stdev=2084.29 00:21:09.984 clat percentiles (msec): 00:21:09.984 | 1.00th=[ 2140], 5.00th=[ 3876], 10.00th=[ 4010], 20.00th=[ 6409], 00:21:09.984 | 30.00th=[ 8020], 40.00th=[ 8490], 50.00th=[ 8792], 60.00th=[ 9060], 00:21:09.984 | 70.00th=[ 9329], 80.00th=[ 9597], 90.00th=[ 9866], 95.00th=[10000], 00:21:09.984 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:09.984 | 99.99th=[10134] 00:21:09.984 bw ( KiB/s): min= 2048, max=38834, per=0.26%, avg=11757.50, stdev=18076.75, samples=4 00:21:09.984 iops : min= 2, max= 37, avg=11.25, stdev=17.19, samples=4 00:21:09.984 lat (msec) : 50=0.66%, >=2000=99.34% 00:21:09.984 cpu : usr=0.01%, sys=0.57%, ctx=465, majf=0, minf=32769 00:21:09.984 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.3%, 16=10.6%, 
32=21.2%, >=64=58.3% 00:21:09.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.984 complete : 0=0.0%, 4=96.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.0% 00:21:09.984 issued rwts: total=151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.984 job0: (groupid=0, jobs=1): err= 0: pid=1877304: Mon Jul 15 15:03:24 2024 00:21:09.984 read: IOPS=86, BW=86.3MiB/s (90.5MB/s)(923MiB/10691msec) 00:21:09.984 slat (usec): min=33, max=2051.8k, avg=11555.94, stdev=68477.36 00:21:09.984 clat (msec): min=19, max=2768, avg=1342.92, stdev=618.35 00:21:09.984 lat (msec): min=657, max=2769, avg=1354.47, stdev=618.25 00:21:09.984 clat percentiles (msec): 00:21:09.984 | 1.00th=[ 659], 5.00th=[ 693], 10.00th=[ 701], 20.00th=[ 709], 00:21:09.984 | 30.00th=[ 735], 40.00th=[ 911], 50.00th=[ 1385], 60.00th=[ 1552], 00:21:09.984 | 70.00th=[ 1620], 80.00th=[ 1838], 90.00th=[ 2265], 95.00th=[ 2500], 00:21:09.984 | 99.00th=[ 2769], 99.50th=[ 2769], 99.90th=[ 2769], 99.95th=[ 2769], 00:21:09.984 | 99.99th=[ 2769] 00:21:09.984 bw ( KiB/s): min=14336, max=204391, per=2.58%, avg=116244.79, stdev=57933.45, samples=14 00:21:09.984 iops : min= 14, max= 199, avg=113.36, stdev=56.59, samples=14 00:21:09.984 lat (msec) : 20=0.11%, 750=32.07%, 1000=9.32%, 2000=41.60%, >=2000=16.90% 00:21:09.984 cpu : usr=0.07%, sys=1.48%, ctx=1555, majf=0, minf=32769 00:21:09.984 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.2% 00:21:09.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.984 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.984 issued rwts: total=923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.984 job0: (groupid=0, jobs=1): err= 0: pid=1877305: Mon Jul 15 15:03:24 2024 00:21:09.984 read: IOPS=122, BW=123MiB/s (129MB/s)(1231MiB/10023msec) 00:21:09.984 slat (usec): min=29, max=74509, avg=8120.08, stdev=11822.51 00:21:09.984 clat (msec): min=19, max=2702, avg=867.66, stdev=450.95 00:21:09.984 lat (msec): min=24, max=2715, avg=875.78, stdev=455.73 00:21:09.984 clat percentiles (msec): 00:21:09.984 | 1.00th=[ 39], 5.00th=[ 114], 10.00th=[ 380], 20.00th=[ 684], 00:21:09.984 | 30.00th=[ 709], 40.00th=[ 726], 50.00th=[ 743], 60.00th=[ 776], 00:21:09.984 | 70.00th=[ 927], 80.00th=[ 1167], 90.00th=[ 1469], 95.00th=[ 1670], 00:21:09.984 | 99.00th=[ 2500], 99.50th=[ 2601], 99.90th=[ 2702], 99.95th=[ 2702], 00:21:09.984 | 99.99th=[ 2702] 00:21:09.984 bw ( KiB/s): min=55296, max=296960, per=3.35%, avg=150721.13, stdev=60809.15, samples=15 00:21:09.984 iops : min= 54, max= 290, avg=147.07, stdev=59.41, samples=15 00:21:09.984 lat (msec) : 20=0.08%, 50=1.38%, 100=2.60%, 250=3.82%, 500=4.96% 00:21:09.984 lat (msec) : 750=39.56%, 1000=21.12%, 2000=23.72%, >=2000=2.76% 00:21:09.984 cpu : usr=0.06%, sys=1.69%, ctx=1801, majf=0, minf=32769 00:21:09.984 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:21:09.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.984 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.984 issued rwts: total=1231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.984 job0: (groupid=0, jobs=1): err= 0: pid=1877306: Mon Jul 15 15:03:24 2024 00:21:09.984 read: IOPS=35, BW=35.5MiB/s (37.2MB/s)(384MiB/10815msec) 
00:21:09.984 slat (usec): min=28, max=2136.0k, avg=28108.20, stdev=168129.30 00:21:09.984 clat (msec): min=19, max=5393, avg=3110.90, stdev=1728.15 00:21:09.984 lat (msec): min=789, max=5393, avg=3139.01, stdev=1721.32 00:21:09.984 clat percentiles (msec): 00:21:09.984 | 1.00th=[ 793], 5.00th=[ 802], 10.00th=[ 802], 20.00th=[ 902], 00:21:09.984 | 30.00th=[ 1418], 40.00th=[ 2123], 50.00th=[ 3742], 60.00th=[ 4077], 00:21:09.984 | 70.00th=[ 4597], 80.00th=[ 4799], 90.00th=[ 5067], 95.00th=[ 5269], 00:21:09.984 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:21:09.984 | 99.99th=[ 5403] 00:21:09.984 bw ( KiB/s): min= 8192, max=159744, per=1.66%, avg=74881.86, stdev=57530.70, samples=7 00:21:09.984 iops : min= 8, max= 156, avg=73.00, stdev=56.23, samples=7 00:21:09.984 lat (msec) : 20=0.26%, 1000=27.86%, 2000=10.68%, >=2000=61.20% 00:21:09.984 cpu : usr=0.02%, sys=1.44%, ctx=829, majf=0, minf=32769 00:21:09.984 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6% 00:21:09.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.984 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:09.984 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.984 job0: (groupid=0, jobs=1): err= 0: pid=1877307: Mon Jul 15 15:03:24 2024 00:21:09.984 read: IOPS=53, BW=53.7MiB/s (56.3MB/s)(543MiB/10112msec) 00:21:09.984 slat (usec): min=33, max=2092.8k, avg=18510.83, stdev=125856.53 00:21:09.984 clat (msec): min=56, max=5956, avg=1776.63, stdev=1574.45 00:21:09.984 lat (msec): min=116, max=5968, avg=1795.14, stdev=1583.67 00:21:09.984 clat percentiles (msec): 00:21:09.984 | 1.00th=[ 122], 5.00th=[ 213], 10.00th=[ 384], 20.00th=[ 651], 00:21:09.984 | 30.00th=[ 751], 40.00th=[ 802], 50.00th=[ 1401], 60.00th=[ 1703], 00:21:09.984 | 70.00th=[ 2140], 80.00th=[ 2366], 90.00th=[ 5403], 95.00th=[ 5738], 00:21:09.984 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:21:09.984 | 99.99th=[ 5940] 00:21:09.984 bw ( KiB/s): min=20480, max=186729, per=2.09%, avg=93961.89, stdev=58868.03, samples=9 00:21:09.984 iops : min= 20, max= 182, avg=91.56, stdev=57.31, samples=9 00:21:09.984 lat (msec) : 100=0.18%, 250=5.71%, 500=8.47%, 750=16.39%, 1000=12.15% 00:21:09.984 lat (msec) : 2000=25.78%, >=2000=31.31% 00:21:09.984 cpu : usr=0.06%, sys=2.19%, ctx=1260, majf=0, minf=32769 00:21:09.984 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4% 00:21:09.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.984 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.984 issued rwts: total=543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.984 job0: (groupid=0, jobs=1): err= 0: pid=1877308: Mon Jul 15 15:03:24 2024 00:21:09.984 read: IOPS=3, BW=3779KiB/s (3870kB/s)(39.0MiB/10567msec) 00:21:09.984 slat (usec): min=1896, max=2140.2k, avg=270163.32, stdev=650059.26 00:21:09.984 clat (msec): min=29, max=10500, avg=3494.31, stdev=2851.85 00:21:09.984 lat (msec): min=1334, max=10566, avg=3764.47, stdev=3009.72 00:21:09.984 clat percentiles (msec): 00:21:09.984 | 1.00th=[ 30], 5.00th=[ 1334], 10.00th=[ 1351], 20.00th=[ 1452], 00:21:09.984 | 30.00th=[ 1620], 40.00th=[ 1720], 50.00th=[ 1921], 60.00th=[ 2039], 00:21:09.984 | 70.00th=[ 4279], 80.00th=[ 6409], 90.00th=[ 8557], 95.00th=[ 8658], 
00:21:09.984 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:09.984 | 99.99th=[10537] 00:21:09.984 lat (msec) : 50=2.56%, 2000=53.85%, >=2000=43.59% 00:21:09.984 cpu : usr=0.02%, sys=0.22%, ctx=208, majf=0, minf=9985 00:21:09.984 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:21:09.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.984 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:09.984 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.984 job0: (groupid=0, jobs=1): err= 0: pid=1877309: Mon Jul 15 15:03:24 2024 00:21:09.984 read: IOPS=9, BW=9712KiB/s (9945kB/s)(100MiB/10544msec) 00:21:09.984 slat (usec): min=780, max=2821.5k, avg=105056.80, stdev=436009.12 00:21:09.984 clat (msec): min=37, max=10510, avg=5438.50, stdev=1351.72 00:21:09.984 lat (msec): min=2016, max=10543, avg=5543.55, stdev=1335.86 00:21:09.984 clat percentiles (msec): 00:21:09.984 | 1.00th=[ 37], 5.00th=[ 2140], 10.00th=[ 5000], 20.00th=[ 5067], 00:21:09.984 | 30.00th=[ 5201], 40.00th=[ 5403], 50.00th=[ 5537], 60.00th=[ 5671], 00:21:09.984 | 70.00th=[ 5873], 80.00th=[ 6007], 90.00th=[ 6342], 95.00th=[ 6477], 00:21:09.984 | 99.00th=[ 8658], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:09.984 | 99.99th=[10537] 00:21:09.984 lat (msec) : 50=1.00%, >=2000=99.00% 00:21:09.984 cpu : usr=0.01%, sys=0.38%, ctx=365, majf=0, minf=25601 00:21:09.984 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.0%, 16=16.0%, 32=32.0%, >=64=37.0% 00:21:09.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.984 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.984 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.984 job0: (groupid=0, jobs=1): err= 0: pid=1877310: Mon Jul 15 15:03:24 2024 00:21:09.984 read: IOPS=99, BW=99.6MiB/s (104MB/s)(1001MiB/10054msec) 00:21:09.984 slat (usec): min=23, max=2081.9k, avg=9992.20, stdev=89715.57 00:21:09.985 clat (msec): min=47, max=5433, avg=762.95, stdev=772.92 00:21:09.985 lat (msec): min=68, max=5456, avg=772.94, stdev=787.42 00:21:09.985 clat percentiles (msec): 00:21:09.985 | 1.00th=[ 103], 5.00th=[ 305], 10.00th=[ 305], 20.00th=[ 317], 00:21:09.985 | 30.00th=[ 363], 40.00th=[ 397], 50.00th=[ 426], 60.00th=[ 567], 00:21:09.985 | 70.00th=[ 869], 80.00th=[ 1133], 90.00th=[ 1620], 95.00th=[ 1687], 00:21:09.985 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:21:09.985 | 99.99th=[ 5403] 00:21:09.985 bw ( KiB/s): min=24576, max=370688, per=3.96%, avg=178354.30, stdev=143287.04, samples=10 00:21:09.985 iops : min= 24, max= 362, avg=173.90, stdev=140.08, samples=10 00:21:09.985 lat (msec) : 50=0.10%, 100=0.80%, 250=1.70%, 500=52.95%, 750=9.59% 00:21:09.985 lat (msec) : 1000=9.19%, 2000=23.58%, >=2000=2.10% 00:21:09.985 cpu : usr=0.04%, sys=1.71%, ctx=1583, majf=0, minf=32769 00:21:09.985 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:21:09.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.985 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.985 issued rwts: total=1001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.985 job0: (groupid=0, jobs=1): err= 0: 
pid=1877311: Mon Jul 15 15:03:24 2024 00:21:09.985 read: IOPS=52, BW=52.6MiB/s (55.1MB/s)(531MiB/10101msec) 00:21:09.985 slat (usec): min=46, max=2155.7k, avg=18872.72, stdev=129306.76 00:21:09.985 clat (msec): min=76, max=7641, avg=2293.69, stdev=1466.50 00:21:09.985 lat (msec): min=102, max=7693, avg=2312.56, stdev=1480.09 00:21:09.985 clat percentiles (msec): 00:21:09.985 | 1.00th=[ 131], 5.00th=[ 279], 10.00th=[ 426], 20.00th=[ 802], 00:21:09.985 | 30.00th=[ 1200], 40.00th=[ 1301], 50.00th=[ 1603], 60.00th=[ 3507], 00:21:09.985 | 70.00th=[ 3675], 80.00th=[ 3742], 90.00th=[ 3943], 95.00th=[ 3977], 00:21:09.985 | 99.00th=[ 3977], 99.50th=[ 5537], 99.90th=[ 7617], 99.95th=[ 7617], 00:21:09.985 | 99.99th=[ 7617] 00:21:09.985 bw ( KiB/s): min=30658, max=155784, per=1.65%, avg=74091.09, stdev=35383.89, samples=11 00:21:09.985 iops : min= 29, max= 152, avg=72.18, stdev=34.61, samples=11 00:21:09.985 lat (msec) : 100=0.19%, 250=3.58%, 500=7.91%, 750=7.91%, 1000=2.64% 00:21:09.985 lat (msec) : 2000=30.13%, >=2000=47.65% 00:21:09.985 cpu : usr=0.01%, sys=1.66%, ctx=1179, majf=0, minf=32769 00:21:09.985 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.1% 00:21:09.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.985 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.985 issued rwts: total=531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.985 job1: (groupid=0, jobs=1): err= 0: pid=1877329: Mon Jul 15 15:03:24 2024 00:21:09.985 read: IOPS=38, BW=38.1MiB/s (40.0MB/s)(383MiB/10047msec) 00:21:09.985 slat (usec): min=26, max=2072.6k, avg=26132.50, stdev=179737.90 00:21:09.985 clat (msec): min=35, max=6817, avg=1764.36, stdev=1657.41 00:21:09.985 lat (msec): min=46, max=6824, avg=1790.49, stdev=1674.09 00:21:09.985 clat percentiles (msec): 00:21:09.985 | 1.00th=[ 89], 5.00th=[ 342], 10.00th=[ 401], 20.00th=[ 642], 00:21:09.985 | 30.00th=[ 1150], 40.00th=[ 1318], 50.00th=[ 1485], 60.00th=[ 1620], 00:21:09.985 | 70.00th=[ 1703], 80.00th=[ 1787], 90.00th=[ 2567], 95.00th=[ 6745], 00:21:09.985 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:21:09.985 | 99.99th=[ 6812] 00:21:09.985 bw ( KiB/s): min=36864, max=120832, per=1.66%, avg=74752.00, stdev=30890.20, samples=6 00:21:09.985 iops : min= 36, max= 118, avg=73.00, stdev=30.17, samples=6 00:21:09.985 lat (msec) : 50=0.52%, 100=1.04%, 250=1.57%, 500=12.27%, 750=6.27% 00:21:09.985 lat (msec) : 1000=5.22%, 2000=58.22%, >=2000=14.88% 00:21:09.985 cpu : usr=0.00%, sys=1.24%, ctx=659, majf=0, minf=32769 00:21:09.985 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.6% 00:21:09.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.985 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:09.985 issued rwts: total=383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.985 job1: (groupid=0, jobs=1): err= 0: pid=1877330: Mon Jul 15 15:03:24 2024 00:21:09.985 read: IOPS=5, BW=5985KiB/s (6129kB/s)(62.0MiB/10608msec) 00:21:09.985 slat (usec): min=661, max=2113.6k, avg=170620.79, stdev=487504.94 00:21:09.985 clat (msec): min=28, max=10606, avg=5277.41, stdev=3452.77 00:21:09.985 lat (msec): min=1912, max=10607, avg=5448.03, stdev=3450.54 00:21:09.985 clat percentiles (msec): 00:21:09.985 | 1.00th=[ 29], 5.00th=[ 1921], 10.00th=[ 2123], 20.00th=[ 
2534], 00:21:09.985 | 30.00th=[ 2668], 40.00th=[ 3037], 50.00th=[ 3675], 60.00th=[ 4077], 00:21:09.985 | 70.00th=[ 6409], 80.00th=[10537], 90.00th=[10537], 95.00th=[10671], 00:21:09.985 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:21:09.985 | 99.99th=[10671] 00:21:09.985 lat (msec) : 50=1.61%, 2000=4.84%, >=2000=93.55% 00:21:09.985 cpu : usr=0.00%, sys=0.44%, ctx=331, majf=0, minf=15873 00:21:09.985 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:21:09.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.985 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:09.985 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.985 job1: (groupid=0, jobs=1): err= 0: pid=1877331: Mon Jul 15 15:03:24 2024 00:21:09.985 read: IOPS=25, BW=25.7MiB/s (27.0MB/s)(277MiB/10758msec) 00:21:09.985 slat (usec): min=54, max=2145.0k, avg=38724.55, stdev=212794.63 00:21:09.985 clat (msec): min=29, max=9070, avg=4719.92, stdev=2304.38 00:21:09.985 lat (msec): min=1201, max=9095, avg=4758.65, stdev=2306.64 00:21:09.985 clat percentiles (msec): 00:21:09.985 | 1.00th=[ 1200], 5.00th=[ 1368], 10.00th=[ 1653], 20.00th=[ 2265], 00:21:09.985 | 30.00th=[ 3071], 40.00th=[ 3608], 50.00th=[ 4010], 60.00th=[ 6477], 00:21:09.985 | 70.00th=[ 6611], 80.00th=[ 6879], 90.00th=[ 7282], 95.00th=[ 7416], 00:21:09.985 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:21:09.985 | 99.99th=[ 9060] 00:21:09.985 bw ( KiB/s): min= 2048, max=57344, per=0.75%, avg=33904.78, stdev=19939.76, samples=9 00:21:09.985 iops : min= 2, max= 56, avg=33.00, stdev=19.66, samples=9 00:21:09.985 lat (msec) : 50=0.36%, 2000=13.00%, >=2000=86.64% 00:21:09.985 cpu : usr=0.00%, sys=1.57%, ctx=657, majf=0, minf=32769 00:21:09.985 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.3% 00:21:09.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.985 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:21:09.985 issued rwts: total=277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.985 job1: (groupid=0, jobs=1): err= 0: pid=1877332: Mon Jul 15 15:03:24 2024 00:21:09.985 read: IOPS=16, BW=16.3MiB/s (17.1MB/s)(164MiB/10078msec) 00:21:09.985 slat (usec): min=52, max=2125.1k, avg=60985.78, stdev=315990.88 00:21:09.985 clat (msec): min=74, max=9856, avg=3124.20, stdev=3795.87 00:21:09.985 lat (msec): min=108, max=9867, avg=3185.18, stdev=3826.39 00:21:09.985 clat percentiles (msec): 00:21:09.985 | 1.00th=[ 109], 5.00th=[ 122], 10.00th=[ 186], 20.00th=[ 376], 00:21:09.985 | 30.00th=[ 584], 40.00th=[ 785], 50.00th=[ 961], 60.00th=[ 1200], 00:21:09.985 | 70.00th=[ 3708], 80.00th=[ 9597], 90.00th=[ 9731], 95.00th=[ 9866], 00:21:09.985 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:21:09.985 | 99.99th=[ 9866] 00:21:09.985 bw ( KiB/s): min=74729, max=74729, per=1.66%, avg=74729.00, stdev= 0.00, samples=1 00:21:09.985 iops : min= 72, max= 72, avg=72.00, stdev= 0.00, samples=1 00:21:09.985 lat (msec) : 100=0.61%, 250=12.80%, 500=12.20%, 750=13.41%, 1000=11.59% 00:21:09.985 lat (msec) : 2000=18.29%, >=2000=31.10% 00:21:09.985 cpu : usr=0.01%, sys=1.05%, ctx=385, majf=0, minf=32769 00:21:09.985 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.9%, 16=9.8%, 32=19.5%, >=64=61.6% 00:21:09.985 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.985 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:21:09.985 issued rwts: total=164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.985 job1: (groupid=0, jobs=1): err= 0: pid=1877333: Mon Jul 15 15:03:24 2024 00:21:09.985 read: IOPS=9, BW=9940KiB/s (10.2MB/s)(105MiB/10817msec) 00:21:09.985 slat (usec): min=307, max=2139.2k, avg=102766.65, stdev=429219.98 00:21:09.985 clat (msec): min=25, max=10812, avg=8037.39, stdev=3800.80 00:21:09.985 lat (msec): min=1854, max=10816, avg=8140.15, stdev=3727.27 00:21:09.985 clat percentiles (msec): 00:21:09.985 | 1.00th=[ 1854], 5.00th=[ 1854], 10.00th=[ 1905], 20.00th=[ 2022], 00:21:09.985 | 30.00th=[ 6409], 40.00th=[10537], 50.00th=[10671], 60.00th=[10671], 00:21:09.985 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:21:09.985 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:21:09.985 | 99.99th=[10805] 00:21:09.985 lat (msec) : 50=0.95%, 2000=17.14%, >=2000=81.90% 00:21:09.985 cpu : usr=0.01%, sys=1.13%, ctx=173, majf=0, minf=26881 00:21:09.985 IO depths : 1=1.0%, 2=1.9%, 4=3.8%, 8=7.6%, 16=15.2%, 32=30.5%, >=64=40.0% 00:21:09.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.985 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.985 issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.985 job1: (groupid=0, jobs=1): err= 0: pid=1877334: Mon Jul 15 15:03:24 2024 00:21:09.985 read: IOPS=5, BW=5366KiB/s (5495kB/s)(55.0MiB/10496msec) 00:21:09.985 slat (usec): min=332, max=4080.2k, avg=190249.80, stdev=653255.04 00:21:09.985 clat (msec): min=31, max=10478, avg=2992.57, stdev=1387.20 00:21:09.985 lat (msec): min=1916, max=10495, avg=3182.82, stdev=1663.63 00:21:09.985 clat percentiles (msec): 00:21:09.985 | 1.00th=[ 32], 5.00th=[ 1921], 10.00th=[ 1921], 20.00th=[ 2123], 00:21:09.985 | 30.00th=[ 2299], 40.00th=[ 2467], 50.00th=[ 2735], 60.00th=[ 2937], 00:21:09.985 | 70.00th=[ 3306], 80.00th=[ 3708], 90.00th=[ 4077], 95.00th=[ 4245], 00:21:09.985 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:09.985 | 99.99th=[10537] 00:21:09.985 lat (msec) : 50=1.82%, 2000=9.09%, >=2000=89.09% 00:21:09.985 cpu : usr=0.01%, sys=0.30%, ctx=315, majf=0, minf=14081 00:21:09.985 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0% 00:21:09.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.985 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:09.985 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.985 job1: (groupid=0, jobs=1): err= 0: pid=1877335: Mon Jul 15 15:03:24 2024 00:21:09.985 read: IOPS=7, BW=8064KiB/s (8257kB/s)(83.0MiB/10540msec) 00:21:09.985 slat (usec): min=649, max=2166.9k, avg=126483.21, stdev=439441.11 00:21:09.985 clat (msec): min=41, max=10516, avg=3280.56, stdev=1435.36 00:21:09.985 lat (msec): min=2116, max=10539, avg=3407.04, stdev=1599.62 00:21:09.985 clat percentiles (msec): 00:21:09.985 | 1.00th=[ 42], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 2366], 00:21:09.985 | 30.00th=[ 2534], 40.00th=[ 2769], 50.00th=[ 2937], 60.00th=[ 3306], 00:21:09.986 | 70.00th=[ 3574], 80.00th=[ 3977], 
90.00th=[ 4245], 95.00th=[ 4329], 00:21:09.986 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:09.986 | 99.99th=[10537] 00:21:09.986 lat (msec) : 50=1.20%, >=2000=98.80% 00:21:09.986 cpu : usr=0.01%, sys=0.40%, ctx=299, majf=0, minf=21249 00:21:09.986 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:21:09.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.986 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.986 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.986 job1: (groupid=0, jobs=1): err= 0: pid=1877336: Mon Jul 15 15:03:24 2024 00:21:09.986 read: IOPS=78, BW=78.2MiB/s (82.0MB/s)(824MiB/10542msec) 00:21:09.986 slat (usec): min=31, max=2115.1k, avg=12785.80, stdev=90324.75 00:21:09.986 clat (usec): min=953, max=5804.1k, avg=1538117.55, stdev=1543857.38 00:21:09.986 lat (msec): min=551, max=5806, avg=1550.90, stdev=1548.70 00:21:09.986 clat percentiles (msec): 00:21:09.986 | 1.00th=[ 600], 5.00th=[ 625], 10.00th=[ 709], 20.00th=[ 785], 00:21:09.986 | 30.00th=[ 835], 40.00th=[ 852], 50.00th=[ 877], 60.00th=[ 936], 00:21:09.986 | 70.00th=[ 961], 80.00th=[ 1351], 90.00th=[ 4799], 95.00th=[ 5671], 00:21:09.986 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:21:09.986 | 99.99th=[ 5805] 00:21:09.986 bw ( KiB/s): min= 4096, max=194171, per=2.43%, avg=109526.69, stdev=68004.29, samples=13 00:21:09.986 iops : min= 4, max= 189, avg=106.85, stdev=66.42, samples=13 00:21:09.986 lat (usec) : 1000=0.12% 00:21:09.986 lat (msec) : 750=13.47%, 1000=63.35%, 2000=7.40%, >=2000=15.66% 00:21:09.986 cpu : usr=0.05%, sys=1.33%, ctx=1495, majf=0, minf=32769 00:21:09.986 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:21:09.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.986 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.986 issued rwts: total=824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.986 job1: (groupid=0, jobs=1): err= 0: pid=1877337: Mon Jul 15 15:03:24 2024 00:21:09.986 read: IOPS=77, BW=77.4MiB/s (81.1MB/s)(780MiB/10080msec) 00:21:09.986 slat (usec): min=36, max=2121.1k, avg=12828.55, stdev=105424.75 00:21:09.986 clat (msec): min=66, max=5223, avg=1549.41, stdev=1525.89 00:21:09.986 lat (msec): min=82, max=5232, avg=1562.23, stdev=1531.15 00:21:09.986 clat percentiles (msec): 00:21:09.986 | 1.00th=[ 112], 5.00th=[ 305], 10.00th=[ 558], 20.00th=[ 818], 00:21:09.986 | 30.00th=[ 844], 40.00th=[ 911], 50.00th=[ 978], 60.00th=[ 1011], 00:21:09.986 | 70.00th=[ 1070], 80.00th=[ 1099], 90.00th=[ 5000], 95.00th=[ 5134], 00:21:09.986 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5201], 99.95th=[ 5201], 00:21:09.986 | 99.99th=[ 5201] 00:21:09.986 bw ( KiB/s): min=10240, max=161792, per=2.46%, avg=110901.83, stdev=51313.96, samples=12 00:21:09.986 iops : min= 10, max= 158, avg=108.25, stdev=50.06, samples=12 00:21:09.986 lat (msec) : 100=0.64%, 250=3.21%, 500=4.62%, 750=5.26%, 1000=41.92% 00:21:09.986 lat (msec) : 2000=26.41%, >=2000=17.95% 00:21:09.986 cpu : usr=0.09%, sys=1.44%, ctx=846, majf=0, minf=32769 00:21:09.986 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:21:09.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.986 complete : 
0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.986 issued rwts: total=780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.986 job1: (groupid=0, jobs=1): err= 0: pid=1877339: Mon Jul 15 15:03:24 2024 00:21:09.986 read: IOPS=59, BW=59.1MiB/s (61.9MB/s)(628MiB/10634msec) 00:21:09.986 slat (usec): min=32, max=2084.0k, avg=16861.58, stdev=157966.04 00:21:09.986 clat (msec): min=41, max=8566, avg=1932.31, stdev=2247.64 00:21:09.986 lat (msec): min=509, max=10293, avg=1949.17, stdev=2262.58 00:21:09.986 clat percentiles (msec): 00:21:09.986 | 1.00th=[ 510], 5.00th=[ 510], 10.00th=[ 510], 20.00th=[ 514], 00:21:09.986 | 30.00th=[ 518], 40.00th=[ 518], 50.00th=[ 523], 60.00th=[ 558], 00:21:09.986 | 70.00th=[ 2165], 80.00th=[ 4279], 90.00th=[ 6544], 95.00th=[ 6678], 00:21:09.986 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 8557], 99.95th=[ 8557], 00:21:09.986 | 99.99th=[ 8557] 00:21:09.986 bw ( KiB/s): min=14336, max=253952, per=2.84%, avg=128000.00, stdev=108695.70, samples=8 00:21:09.986 iops : min= 14, max= 248, avg=125.00, stdev=106.15, samples=8 00:21:09.986 lat (msec) : 50=0.16%, 750=63.54%, >=2000=36.31% 00:21:09.986 cpu : usr=0.02%, sys=0.96%, ctx=578, majf=0, minf=32769 00:21:09.986 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:21:09.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.986 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.986 issued rwts: total=628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.986 job1: (groupid=0, jobs=1): err= 0: pid=1877340: Mon Jul 15 15:03:24 2024 00:21:09.986 read: IOPS=152, BW=153MiB/s (160MB/s)(1529MiB/10013msec) 00:21:09.986 slat (usec): min=25, max=213488, avg=6534.44, stdev=14805.53 00:21:09.986 clat (msec): min=12, max=1482, avg=768.57, stdev=337.18 00:21:09.986 lat (msec): min=12, max=1494, avg=775.10, stdev=339.47 00:21:09.986 clat percentiles (msec): 00:21:09.986 | 1.00th=[ 23], 5.00th=[ 77], 10.00th=[ 182], 20.00th=[ 609], 00:21:09.986 | 30.00th=[ 625], 40.00th=[ 693], 50.00th=[ 751], 60.00th=[ 802], 00:21:09.986 | 70.00th=[ 894], 80.00th=[ 1011], 90.00th=[ 1284], 95.00th=[ 1351], 00:21:09.986 | 99.00th=[ 1452], 99.50th=[ 1469], 99.90th=[ 1485], 99.95th=[ 1485], 00:21:09.986 | 99.99th=[ 1485] 00:21:09.986 bw ( KiB/s): min= 6144, max=221184, per=3.26%, avg=146810.59, stdev=56537.46, samples=17 00:21:09.986 iops : min= 6, max= 216, avg=143.29, stdev=55.16, samples=17 00:21:09.986 lat (msec) : 20=0.78%, 50=2.81%, 100=2.22%, 250=4.91%, 500=1.90% 00:21:09.986 lat (msec) : 750=37.41%, 1000=28.97%, 2000=20.99% 00:21:09.986 cpu : usr=0.10%, sys=1.97%, ctx=1794, majf=0, minf=32769 00:21:09.986 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:21:09.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.986 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.986 issued rwts: total=1529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.986 job1: (groupid=0, jobs=1): err= 0: pid=1877341: Mon Jul 15 15:03:24 2024 00:21:09.986 read: IOPS=299, BW=300MiB/s (314MB/s)(3005MiB/10026msec) 00:21:09.986 slat (usec): min=25, max=99317, avg=3324.38, stdev=5722.15 00:21:09.986 clat (msec): min=25, max=1595, avg=412.61, stdev=317.44 00:21:09.986 lat (msec): 
min=26, max=1606, avg=415.94, stdev=319.74 00:21:09.986 clat percentiles (msec): 00:21:09.986 | 1.00th=[ 78], 5.00th=[ 205], 10.00th=[ 207], 20.00th=[ 209], 00:21:09.986 | 30.00th=[ 211], 40.00th=[ 215], 50.00th=[ 313], 60.00th=[ 334], 00:21:09.986 | 70.00th=[ 418], 80.00th=[ 430], 90.00th=[ 835], 95.00th=[ 1250], 00:21:09.986 | 99.00th=[ 1502], 99.50th=[ 1536], 99.90th=[ 1552], 99.95th=[ 1569], 00:21:09.986 | 99.99th=[ 1603] 00:21:09.986 bw ( KiB/s): min=77824, max=622592, per=6.89%, avg=310206.05, stdev=199796.43, samples=19 00:21:09.986 iops : min= 76, max= 608, avg=302.89, stdev=195.16, samples=19 00:21:09.986 lat (msec) : 50=0.53%, 100=0.93%, 250=41.80%, 500=37.84%, 750=4.99% 00:21:09.986 lat (msec) : 1000=6.46%, 2000=7.45% 00:21:09.986 cpu : usr=0.07%, sys=3.44%, ctx=3482, majf=0, minf=32206 00:21:09.986 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:21:09.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.986 issued rwts: total=3005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.986 job1: (groupid=0, jobs=1): err= 0: pid=1877342: Mon Jul 15 15:03:24 2024 00:21:09.986 read: IOPS=32, BW=32.0MiB/s (33.6MB/s)(323MiB/10087msec) 00:21:09.986 slat (usec): min=877, max=2136.7k, avg=30982.36, stdev=202147.38 00:21:09.986 clat (msec): min=76, max=8270, avg=3801.53, stdev=3357.75 00:21:09.986 lat (msec): min=110, max=8282, avg=3832.52, stdev=3363.04 00:21:09.986 clat percentiles (msec): 00:21:09.986 | 1.00th=[ 123], 5.00th=[ 347], 10.00th=[ 592], 20.00th=[ 1028], 00:21:09.986 | 30.00th=[ 1183], 40.00th=[ 1368], 50.00th=[ 1418], 60.00th=[ 3608], 00:21:09.986 | 70.00th=[ 7953], 80.00th=[ 8020], 90.00th=[ 8087], 95.00th=[ 8154], 00:21:09.986 | 99.00th=[ 8221], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288], 00:21:09.986 | 99.99th=[ 8288] 00:21:09.986 bw ( KiB/s): min= 4096, max=86016, per=1.12%, avg=50176.00, stdev=32381.72, samples=8 00:21:09.986 iops : min= 4, max= 84, avg=49.00, stdev=31.62, samples=8 00:21:09.986 lat (msec) : 100=0.31%, 250=3.10%, 500=4.95%, 750=4.95%, 1000=5.57% 00:21:09.986 lat (msec) : 2000=39.94%, >=2000=41.18% 00:21:09.986 cpu : usr=0.04%, sys=1.75%, ctx=707, majf=0, minf=32769 00:21:09.986 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.5% 00:21:09.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.986 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:09.986 issued rwts: total=323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.986 job2: (groupid=0, jobs=1): err= 0: pid=1877351: Mon Jul 15 15:03:24 2024 00:21:09.986 read: IOPS=41, BW=41.4MiB/s (43.4MB/s)(436MiB/10524msec) 00:21:09.986 slat (usec): min=26, max=2071.8k, avg=24061.24, stdev=167593.25 00:21:09.986 clat (msec): min=30, max=5428, avg=2840.23, stdev=1620.58 00:21:09.986 lat (msec): min=872, max=5441, avg=2864.29, stdev=1617.03 00:21:09.986 clat percentiles (msec): 00:21:09.986 | 1.00th=[ 877], 5.00th=[ 885], 10.00th=[ 953], 20.00th=[ 1200], 00:21:09.986 | 30.00th=[ 1418], 40.00th=[ 2165], 50.00th=[ 2802], 60.00th=[ 3071], 00:21:09.986 | 70.00th=[ 3339], 80.00th=[ 5201], 90.00th=[ 5269], 95.00th=[ 5336], 00:21:09.986 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:21:09.986 | 99.99th=[ 5403] 00:21:09.986 bw ( 
KiB/s): min=34746, max=151552, per=2.00%, avg=90102.00, stdev=39499.04, samples=7 00:21:09.986 iops : min= 33, max= 148, avg=87.86, stdev=38.79, samples=7 00:21:09.986 lat (msec) : 50=0.23%, 1000=15.37%, 2000=22.25%, >=2000=62.16% 00:21:09.986 cpu : usr=0.03%, sys=1.06%, ctx=781, majf=0, minf=32769 00:21:09.986 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.3%, >=64=85.6% 00:21:09.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.986 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:09.986 issued rwts: total=436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.986 job2: (groupid=0, jobs=1): err= 0: pid=1877352: Mon Jul 15 15:03:24 2024 00:21:09.987 read: IOPS=29, BW=29.8MiB/s (31.2MB/s)(321MiB/10784msec) 00:21:09.987 slat (usec): min=81, max=2138.2k, avg=33417.33, stdev=231483.04 00:21:09.987 clat (msec): min=54, max=9517, avg=4109.85, stdev=3923.60 00:21:09.987 lat (msec): min=818, max=9522, avg=4143.26, stdev=3925.57 00:21:09.987 clat percentiles (msec): 00:21:09.987 | 1.00th=[ 818], 5.00th=[ 835], 10.00th=[ 852], 20.00th=[ 869], 00:21:09.987 | 30.00th=[ 885], 40.00th=[ 902], 50.00th=[ 919], 60.00th=[ 4279], 00:21:09.987 | 70.00th=[ 8792], 80.00th=[ 9060], 90.00th=[ 9329], 95.00th=[ 9329], 00:21:09.987 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:21:09.987 | 99.99th=[ 9463] 00:21:09.987 bw ( KiB/s): min= 2048, max=147456, per=1.10%, avg=49408.00, stdev=60232.87, samples=8 00:21:09.987 iops : min= 2, max= 144, avg=48.25, stdev=58.82, samples=8 00:21:09.987 lat (msec) : 100=0.31%, 1000=56.70%, 2000=0.31%, >=2000=42.68% 00:21:09.987 cpu : usr=0.01%, sys=1.50%, ctx=648, majf=0, minf=32769 00:21:09.987 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.4% 00:21:09.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.987 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:09.987 issued rwts: total=321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.987 job2: (groupid=0, jobs=1): err= 0: pid=1877353: Mon Jul 15 15:03:24 2024 00:21:09.987 read: IOPS=29, BW=29.5MiB/s (30.9MB/s)(310MiB/10518msec) 00:21:09.987 slat (usec): min=110, max=2111.5k, avg=33745.95, stdev=235936.50 00:21:09.987 clat (msec): min=54, max=9282, avg=4119.79, stdev=3879.17 00:21:09.987 lat (msec): min=740, max=9286, avg=4153.54, stdev=3880.21 00:21:09.987 clat percentiles (msec): 00:21:09.987 | 1.00th=[ 735], 5.00th=[ 751], 10.00th=[ 768], 20.00th=[ 793], 00:21:09.987 | 30.00th=[ 810], 40.00th=[ 818], 50.00th=[ 844], 60.00th=[ 5000], 00:21:09.987 | 70.00th=[ 8792], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9194], 00:21:09.987 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:21:09.987 | 99.99th=[ 9329] 00:21:09.987 bw ( KiB/s): min= 6144, max=172032, per=1.38%, avg=62122.67, stdev=75200.00, samples=6 00:21:09.987 iops : min= 6, max= 168, avg=60.67, stdev=73.44, samples=6 00:21:09.987 lat (msec) : 100=0.32%, 750=4.19%, 1000=50.65%, >=2000=44.84% 00:21:09.987 cpu : usr=0.02%, sys=0.85%, ctx=581, majf=0, minf=32769 00:21:09.987 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.3%, >=64=79.7% 00:21:09.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.987 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:09.987 issued 
rwts: total=310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.987 job2: (groupid=0, jobs=1): err= 0: pid=1877355: Mon Jul 15 15:03:24 2024 00:21:09.987 read: IOPS=58, BW=58.3MiB/s (61.2MB/s)(588MiB/10078msec) 00:21:09.987 slat (usec): min=44, max=2166.4k, avg=17002.97, stdev=123046.57 00:21:09.987 clat (msec): min=75, max=6029, avg=2104.76, stdev=1924.11 00:21:09.987 lat (msec): min=92, max=6042, avg=2121.76, stdev=1930.26 00:21:09.987 clat percentiles (msec): 00:21:09.987 | 1.00th=[ 155], 5.00th=[ 464], 10.00th=[ 701], 20.00th=[ 709], 00:21:09.987 | 30.00th=[ 743], 40.00th=[ 902], 50.00th=[ 1469], 60.00th=[ 1754], 00:21:09.987 | 70.00th=[ 1787], 80.00th=[ 5269], 90.00th=[ 5738], 95.00th=[ 6007], 00:21:09.987 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 00:21:09.987 | 99.99th=[ 6007] 00:21:09.987 bw ( KiB/s): min=10240, max=180224, per=1.61%, avg=72334.62, stdev=44188.04, samples=13 00:21:09.987 iops : min= 10, max= 176, avg=70.62, stdev=43.15, samples=13 00:21:09.987 lat (msec) : 100=0.34%, 250=1.87%, 500=2.89%, 750=30.27%, 1000=8.50% 00:21:09.987 lat (msec) : 2000=33.67%, >=2000=22.45% 00:21:09.987 cpu : usr=0.03%, sys=1.53%, ctx=832, majf=0, minf=32769 00:21:09.987 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:21:09.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.987 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.987 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.987 job2: (groupid=0, jobs=1): err= 0: pid=1877356: Mon Jul 15 15:03:24 2024 00:21:09.987 read: IOPS=13, BW=13.2MiB/s (13.8MB/s)(140MiB/10644msec) 00:21:09.987 slat (usec): min=52, max=2076.9k, avg=75804.49, stdev=334047.68 00:21:09.987 clat (msec): min=30, max=10575, avg=4477.32, stdev=2683.63 00:21:09.987 lat (msec): min=2049, max=10578, avg=4553.13, stdev=2706.59 00:21:09.987 clat percentiles (msec): 00:21:09.987 | 1.00th=[ 2056], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 2400], 00:21:09.987 | 30.00th=[ 2903], 40.00th=[ 3205], 50.00th=[ 3473], 60.00th=[ 3742], 00:21:09.987 | 70.00th=[ 4077], 80.00th=[ 8423], 90.00th=[ 8557], 95.00th=[10537], 00:21:09.987 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:09.987 | 99.99th=[10537] 00:21:09.987 bw ( KiB/s): min= 8192, max=16384, per=0.27%, avg=12288.00, stdev=5792.62, samples=2 00:21:09.987 iops : min= 8, max= 16, avg=12.00, stdev= 5.66, samples=2 00:21:09.987 lat (msec) : 50=0.71%, >=2000=99.29% 00:21:09.987 cpu : usr=0.01%, sys=0.80%, ctx=340, majf=0, minf=32769 00:21:09.987 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.7%, 16=11.4%, 32=22.9%, >=64=55.0% 00:21:09.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.987 complete : 0=0.0%, 4=92.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.1% 00:21:09.987 issued rwts: total=140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.987 job2: (groupid=0, jobs=1): err= 0: pid=1877357: Mon Jul 15 15:03:24 2024 00:21:09.987 read: IOPS=68, BW=68.1MiB/s (71.4MB/s)(687MiB/10085msec) 00:21:09.987 slat (usec): min=35, max=2052.8k, avg=14557.82, stdev=78817.59 00:21:09.987 clat (msec): min=79, max=4672, avg=1748.88, stdev=1250.32 00:21:09.987 lat (msec): min=89, max=4692, avg=1763.44, stdev=1255.97 00:21:09.987 clat percentiles (msec): 
00:21:09.987 | 1.00th=[ 142], 5.00th=[ 493], 10.00th=[ 768], 20.00th=[ 894], 00:21:09.987 | 30.00th=[ 1011], 40.00th=[ 1116], 50.00th=[ 1267], 60.00th=[ 1368], 00:21:09.987 | 70.00th=[ 1871], 80.00th=[ 2500], 90.00th=[ 4144], 95.00th=[ 4530], 00:21:09.987 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:21:09.987 | 99.99th=[ 4665] 00:21:09.987 bw ( KiB/s): min=24576, max=161792, per=1.69%, avg=75959.60, stdev=42083.37, samples=15 00:21:09.987 iops : min= 24, max= 158, avg=74.00, stdev=41.13, samples=15 00:21:09.987 lat (msec) : 100=0.29%, 250=2.04%, 500=2.77%, 750=3.64%, 1000=20.96% 00:21:09.987 lat (msec) : 2000=43.67%, >=2000=26.64% 00:21:09.987 cpu : usr=0.01%, sys=1.38%, ctx=1276, majf=0, minf=32769 00:21:09.987 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:21:09.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.987 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.987 issued rwts: total=687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.987 job2: (groupid=0, jobs=1): err= 0: pid=1877358: Mon Jul 15 15:03:24 2024 00:21:09.987 read: IOPS=3, BW=3922KiB/s (4016kB/s)(41.0MiB/10705msec) 00:21:09.987 slat (usec): min=779, max=2105.7k, avg=259765.10, stdev=675554.32 00:21:09.987 clat (msec): min=54, max=10693, avg=7946.01, stdev=3539.92 00:21:09.987 lat (msec): min=2085, max=10704, avg=8205.78, stdev=3330.92 00:21:09.987 clat percentiles (msec): 00:21:09.987 | 1.00th=[ 55], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4279], 00:21:09.987 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10537], 60.00th=[10537], 00:21:09.987 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:21:09.987 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:21:09.987 | 99.99th=[10671] 00:21:09.987 lat (msec) : 100=2.44%, >=2000=97.56% 00:21:09.987 cpu : usr=0.00%, sys=0.52%, ctx=96, majf=0, minf=10497 00:21:09.987 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:21:09.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.987 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:09.987 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.987 job2: (groupid=0, jobs=1): err= 0: pid=1877360: Mon Jul 15 15:03:24 2024 00:21:09.987 read: IOPS=8, BW=9210KiB/s (9431kB/s)(95.0MiB/10563msec) 00:21:09.987 slat (usec): min=652, max=2075.5k, avg=110858.84, stdev=404755.03 00:21:09.987 clat (msec): min=30, max=10549, avg=4006.22, stdev=2325.58 00:21:09.987 lat (msec): min=2049, max=10562, avg=4117.08, stdev=2384.33 00:21:09.987 clat percentiles (msec): 00:21:09.987 | 1.00th=[ 31], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 2400], 00:21:09.987 | 30.00th=[ 2735], 40.00th=[ 2937], 50.00th=[ 3239], 60.00th=[ 3574], 00:21:09.987 | 70.00th=[ 3943], 80.00th=[ 4245], 90.00th=[ 8490], 95.00th=[10537], 00:21:09.987 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:09.987 | 99.99th=[10537] 00:21:09.987 lat (msec) : 50=1.05%, >=2000=98.95% 00:21:09.987 cpu : usr=0.02%, sys=0.62%, ctx=314, majf=0, minf=24321 00:21:09.987 IO depths : 1=1.1%, 2=2.1%, 4=4.2%, 8=8.4%, 16=16.8%, 32=33.7%, >=64=33.7% 00:21:09.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.987 complete : 0=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.987 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.987 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.987 job2: (groupid=0, jobs=1): err= 0: pid=1877361: Mon Jul 15 15:03:24 2024 00:21:09.987 read: IOPS=85, BW=85.2MiB/s (89.4MB/s)(859MiB/10080msec) 00:21:09.987 slat (usec): min=31, max=2071.7k, avg=11640.68, stdev=95511.02 00:21:09.987 clat (msec): min=76, max=4956, avg=1082.58, stdev=973.35 00:21:09.987 lat (msec): min=86, max=4964, avg=1094.22, stdev=981.51 00:21:09.987 clat percentiles (msec): 00:21:09.987 | 1.00th=[ 150], 5.00th=[ 518], 10.00th=[ 600], 20.00th=[ 709], 00:21:09.987 | 30.00th=[ 743], 40.00th=[ 818], 50.00th=[ 852], 60.00th=[ 894], 00:21:09.987 | 70.00th=[ 969], 80.00th=[ 1003], 90.00th=[ 1267], 95.00th=[ 4866], 00:21:09.987 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:21:09.987 | 99.99th=[ 4933] 00:21:09.987 bw ( KiB/s): min=53141, max=221184, per=3.01%, avg=135567.55, stdev=50692.38, samples=11 00:21:09.987 iops : min= 51, max= 216, avg=132.27, stdev=49.69, samples=11 00:21:09.987 lat (msec) : 100=0.23%, 250=1.86%, 500=2.33%, 750=29.45%, 1000=46.22% 00:21:09.987 lat (msec) : 2000=13.62%, >=2000=6.29% 00:21:09.987 cpu : usr=0.06%, sys=1.84%, ctx=1590, majf=0, minf=32769 00:21:09.987 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:21:09.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.987 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.987 issued rwts: total=859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.988 job2: (groupid=0, jobs=1): err= 0: pid=1877362: Mon Jul 15 15:03:24 2024 00:21:09.988 read: IOPS=3, BW=3841KiB/s (3933kB/s)(40.0MiB/10665msec) 00:21:09.988 slat (usec): min=1284, max=2099.9k, avg=265237.31, stdev=677917.56 00:21:09.988 clat (msec): min=54, max=10662, avg=8231.96, stdev=3277.72 00:21:09.988 lat (msec): min=2083, max=10664, avg=8497.20, stdev=3018.00 00:21:09.988 clat percentiles (msec): 00:21:09.988 | 1.00th=[ 55], 5.00th=[ 2089], 10.00th=[ 2165], 20.00th=[ 4329], 00:21:09.988 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10537], 60.00th=[10537], 00:21:09.988 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:21:09.988 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:21:09.988 | 99.99th=[10671] 00:21:09.988 lat (msec) : 100=2.50%, >=2000=97.50% 00:21:09.988 cpu : usr=0.00%, sys=0.47%, ctx=83, majf=0, minf=10241 00:21:09.988 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:21:09.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.988 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:09.988 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.988 job2: (groupid=0, jobs=1): err= 0: pid=1877363: Mon Jul 15 15:03:24 2024 00:21:09.988 read: IOPS=78, BW=78.7MiB/s (82.5MB/s)(788MiB/10012msec) 00:21:09.988 slat (usec): min=25, max=2067.0k, avg=12684.80, stdev=99840.71 00:21:09.988 clat (msec): min=10, max=5670, avg=834.12, stdev=455.42 00:21:09.988 lat (msec): min=17, max=5684, avg=846.80, stdev=487.69 00:21:09.988 clat percentiles (msec): 00:21:09.988 | 1.00th=[ 28], 5.00th=[ 118], 10.00th=[ 405], 20.00th=[ 659], 00:21:09.988 | 30.00th=[ 684], 
40.00th=[ 693], 50.00th=[ 751], 60.00th=[ 869], 00:21:09.988 | 70.00th=[ 969], 80.00th=[ 1099], 90.00th=[ 1217], 95.00th=[ 1301], 00:21:09.988 | 99.00th=[ 1603], 99.50th=[ 3775], 99.90th=[ 5671], 99.95th=[ 5671], 00:21:09.988 | 99.99th=[ 5671] 00:21:09.988 bw ( KiB/s): min=86016, max=200704, per=3.23%, avg=145152.00, stdev=44134.79, samples=8 00:21:09.988 iops : min= 84, max= 196, avg=141.75, stdev=43.10, samples=8 00:21:09.988 lat (msec) : 20=0.51%, 50=1.65%, 100=2.03%, 250=3.68%, 500=4.06% 00:21:09.988 lat (msec) : 750=37.94%, 1000=22.21%, 2000=27.16%, >=2000=0.76% 00:21:09.988 cpu : usr=0.07%, sys=1.47%, ctx=1603, majf=0, minf=32769 00:21:09.988 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:21:09.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.988 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.988 issued rwts: total=788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.988 job2: (groupid=0, jobs=1): err= 0: pid=1877364: Mon Jul 15 15:03:24 2024 00:21:09.988 read: IOPS=150, BW=151MiB/s (158MB/s)(1628MiB/10817msec) 00:21:09.988 slat (usec): min=26, max=1993.3k, avg=6622.45, stdev=68066.37 00:21:09.988 clat (msec): min=28, max=6260, avg=818.25, stdev=1164.18 00:21:09.988 lat (msec): min=198, max=6279, avg=824.88, stdev=1171.78 00:21:09.988 clat percentiles (msec): 00:21:09.988 | 1.00th=[ 201], 5.00th=[ 203], 10.00th=[ 203], 20.00th=[ 205], 00:21:09.988 | 30.00th=[ 207], 40.00th=[ 211], 50.00th=[ 355], 60.00th=[ 502], 00:21:09.988 | 70.00th=[ 625], 80.00th=[ 676], 90.00th=[ 2869], 95.00th=[ 3943], 00:21:09.988 | 99.00th=[ 4329], 99.50th=[ 6208], 99.90th=[ 6275], 99.95th=[ 6275], 00:21:09.988 | 99.99th=[ 6275] 00:21:09.988 bw ( KiB/s): min=34816, max=638976, per=4.88%, avg=219393.50, stdev=211930.21, samples=14 00:21:09.988 iops : min= 34, max= 624, avg=214.21, stdev=206.96, samples=14 00:21:09.988 lat (msec) : 50=0.06%, 250=45.70%, 500=14.25%, 750=25.12%, >=2000=14.86% 00:21:09.988 cpu : usr=0.06%, sys=2.30%, ctx=2298, majf=0, minf=32769 00:21:09.988 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:21:09.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.988 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.988 issued rwts: total=1628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.988 job2: (groupid=0, jobs=1): err= 0: pid=1877365: Mon Jul 15 15:03:24 2024 00:21:09.988 read: IOPS=113, BW=113MiB/s (119MB/s)(1217MiB/10725msec) 00:21:09.988 slat (usec): min=27, max=1453.9k, avg=8784.51, stdev=43969.08 00:21:09.988 clat (msec): min=27, max=4122, avg=1077.32, stdev=552.20 00:21:09.988 lat (msec): min=486, max=4135, avg=1086.11, stdev=553.94 00:21:09.988 clat percentiles (msec): 00:21:09.988 | 1.00th=[ 518], 5.00th=[ 584], 10.00th=[ 600], 20.00th=[ 625], 00:21:09.988 | 30.00th=[ 676], 40.00th=[ 776], 50.00th=[ 802], 60.00th=[ 1020], 00:21:09.988 | 70.00th=[ 1250], 80.00th=[ 1569], 90.00th=[ 2022], 95.00th=[ 2140], 00:21:09.988 | 99.00th=[ 2467], 99.50th=[ 2534], 99.90th=[ 4044], 99.95th=[ 4111], 00:21:09.988 | 99.99th=[ 4111] 00:21:09.988 bw ( KiB/s): min=40960, max=235049, per=2.75%, avg=123860.50, stdev=60146.06, samples=18 00:21:09.988 iops : min= 40, max= 229, avg=120.83, stdev=58.75, samples=18 00:21:09.988 lat (msec) : 50=0.08%, 500=0.16%, 750=36.73%, 
1000=21.77%, 2000=30.73% 00:21:09.988 lat (msec) : >=2000=10.52% 00:21:09.988 cpu : usr=0.07%, sys=1.81%, ctx=2090, majf=0, minf=32769 00:21:09.988 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:21:09.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.988 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.988 issued rwts: total=1217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.988 job3: (groupid=0, jobs=1): err= 0: pid=1877375: Mon Jul 15 15:03:24 2024 00:21:09.988 read: IOPS=42, BW=42.7MiB/s (44.7MB/s)(460MiB/10779msec) 00:21:09.988 slat (usec): min=38, max=2131.4k, avg=23312.55, stdev=142209.84 00:21:09.988 clat (msec): min=52, max=7564, avg=2842.00, stdev=2238.64 00:21:09.988 lat (msec): min=910, max=7570, avg=2865.31, stdev=2243.38 00:21:09.988 clat percentiles (msec): 00:21:09.988 | 1.00th=[ 902], 5.00th=[ 927], 10.00th=[ 953], 20.00th=[ 1028], 00:21:09.988 | 30.00th=[ 1083], 40.00th=[ 1351], 50.00th=[ 1770], 60.00th=[ 2089], 00:21:09.988 | 70.00th=[ 3977], 80.00th=[ 4463], 90.00th=[ 6812], 95.00th=[ 7215], 00:21:09.988 | 99.00th=[ 7483], 99.50th=[ 7550], 99.90th=[ 7550], 99.95th=[ 7550], 00:21:09.988 | 99.99th=[ 7550] 00:21:09.988 bw ( KiB/s): min= 2043, max=151552, per=1.26%, avg=56652.67, stdev=45145.42, samples=12 00:21:09.988 iops : min= 1, max= 148, avg=55.17, stdev=44.21, samples=12 00:21:09.988 lat (msec) : 100=0.22%, 1000=16.96%, 2000=38.48%, >=2000=44.35% 00:21:09.988 cpu : usr=0.03%, sys=1.67%, ctx=1117, majf=0, minf=32769 00:21:09.988 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3% 00:21:09.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.988 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:09.988 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.988 job3: (groupid=0, jobs=1): err= 0: pid=1877377: Mon Jul 15 15:03:24 2024 00:21:09.988 read: IOPS=150, BW=150MiB/s (158MB/s)(1520MiB/10109msec) 00:21:09.988 slat (usec): min=31, max=2053.5k, avg=6581.01, stdev=53117.60 00:21:09.988 clat (msec): min=97, max=4060, avg=795.73, stdev=896.76 00:21:09.988 lat (msec): min=108, max=4065, avg=802.32, stdev=901.05 00:21:09.988 clat percentiles (msec): 00:21:09.988 | 1.00th=[ 197], 5.00th=[ 268], 10.00th=[ 271], 20.00th=[ 275], 00:21:09.988 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 321], 60.00th=[ 634], 00:21:09.988 | 70.00th=[ 860], 80.00th=[ 1099], 90.00th=[ 1620], 95.00th=[ 3373], 00:21:09.988 | 99.00th=[ 4010], 99.50th=[ 4044], 99.90th=[ 4077], 99.95th=[ 4077], 00:21:09.988 | 99.99th=[ 4077] 00:21:09.988 bw ( KiB/s): min=14336, max=474187, per=4.20%, avg=188943.53, stdev=159728.45, samples=15 00:21:09.988 iops : min= 14, max= 463, avg=184.40, stdev=156.05, samples=15 00:21:09.988 lat (msec) : 100=0.07%, 250=1.58%, 500=55.53%, 750=5.07%, 1000=15.53% 00:21:09.988 lat (msec) : 2000=13.88%, >=2000=8.36% 00:21:09.988 cpu : usr=0.10%, sys=1.91%, ctx=1777, majf=0, minf=32769 00:21:09.988 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.9% 00:21:09.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.988 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.988 issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.988 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:21:09.988 job3: (groupid=0, jobs=1): err= 0: pid=1877378: Mon Jul 15 15:03:24 2024 00:21:09.988 read: IOPS=70, BW=70.9MiB/s (74.4MB/s)(715MiB/10083msec) 00:21:09.988 slat (usec): min=36, max=1973.2k, avg=13991.11, stdev=75719.69 00:21:09.988 clat (msec): min=75, max=3017, avg=1685.13, stdev=825.62 00:21:09.988 lat (msec): min=147, max=3018, avg=1699.12, stdev=827.49 00:21:09.988 clat percentiles (msec): 00:21:09.988 | 1.00th=[ 153], 5.00th=[ 326], 10.00th=[ 502], 20.00th=[ 793], 00:21:09.988 | 30.00th=[ 1301], 40.00th=[ 1519], 50.00th=[ 1670], 60.00th=[ 1938], 00:21:09.988 | 70.00th=[ 2198], 80.00th=[ 2333], 90.00th=[ 2903], 95.00th=[ 2937], 00:21:09.988 | 99.00th=[ 3004], 99.50th=[ 3004], 99.90th=[ 3004], 99.95th=[ 3004], 00:21:09.988 | 99.99th=[ 3004] 00:21:09.988 bw ( KiB/s): min=32768, max=169984, per=1.78%, avg=80260.07, stdev=45115.37, samples=15 00:21:09.988 iops : min= 32, max= 166, avg=78.33, stdev=43.97, samples=15 00:21:09.988 lat (msec) : 100=0.14%, 250=4.20%, 500=5.31%, 750=8.11%, 1000=8.81% 00:21:09.988 lat (msec) : 2000=35.66%, >=2000=37.76% 00:21:09.989 cpu : usr=0.02%, sys=1.86%, ctx=1572, majf=0, minf=32769 00:21:09.989 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:21:09.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.989 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.989 issued rwts: total=715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.989 job3: (groupid=0, jobs=1): err= 0: pid=1877379: Mon Jul 15 15:03:24 2024 00:21:09.989 read: IOPS=47, BW=47.0MiB/s (49.3MB/s)(506MiB/10765msec) 00:21:09.989 slat (usec): min=57, max=2065.3k, avg=21160.11, stdev=157580.11 00:21:09.989 clat (msec): min=53, max=5010, avg=2017.46, stdev=1774.66 00:21:09.989 lat (msec): min=512, max=5014, avg=2038.62, stdev=1778.79 00:21:09.989 clat percentiles (msec): 00:21:09.989 | 1.00th=[ 510], 5.00th=[ 514], 10.00th=[ 518], 20.00th=[ 535], 00:21:09.989 | 30.00th=[ 575], 40.00th=[ 609], 50.00th=[ 634], 60.00th=[ 1921], 00:21:09.989 | 70.00th=[ 3205], 80.00th=[ 4665], 90.00th=[ 4799], 95.00th=[ 4933], 00:21:09.989 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:21:09.989 | 99.99th=[ 5000] 00:21:09.989 bw ( KiB/s): min= 8192, max=249357, per=2.15%, avg=96705.62, stdev=101892.96, samples=8 00:21:09.989 iops : min= 8, max= 243, avg=94.38, stdev=99.40, samples=8 00:21:09.989 lat (msec) : 100=0.20%, 750=51.98%, 1000=0.79%, 2000=7.71%, >=2000=39.33% 00:21:09.989 cpu : usr=0.02%, sys=1.89%, ctx=625, majf=0, minf=32769 00:21:09.989 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5% 00:21:09.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.989 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:09.989 issued rwts: total=506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.989 job3: (groupid=0, jobs=1): err= 0: pid=1877380: Mon Jul 15 15:03:24 2024 00:21:09.989 read: IOPS=7, BW=7652KiB/s (7836kB/s)(81.0MiB/10839msec) 00:21:09.989 slat (usec): min=559, max=2125.4k, avg=133102.30, stdev=491728.34 00:21:09.989 clat (msec): min=56, max=10830, avg=9886.79, stdev=2180.12 00:21:09.989 lat (msec): min=2084, max=10837, avg=10019.89, stdev=1881.04 00:21:09.989 clat percentiles (msec): 00:21:09.989 | 1.00th=[ 57], 5.00th=[ 4329], 10.00th=[ 
6477], 20.00th=[10537], 00:21:09.989 | 30.00th=[10671], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:21:09.989 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:21:09.989 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:21:09.989 | 99.99th=[10805] 00:21:09.989 lat (msec) : 100=1.23%, >=2000=98.77% 00:21:09.989 cpu : usr=0.00%, sys=1.11%, ctx=143, majf=0, minf=20737 00:21:09.989 IO depths : 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2% 00:21:09.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.989 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.989 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.989 job3: (groupid=0, jobs=1): err= 0: pid=1877381: Mon Jul 15 15:03:24 2024 00:21:09.989 read: IOPS=7, BW=7194KiB/s (7367kB/s)(76.0MiB/10818msec) 00:21:09.989 slat (usec): min=641, max=2144.6k, avg=141581.84, stdev=510702.43 00:21:09.989 clat (msec): min=56, max=10816, avg=10026.50, stdev=2069.63 00:21:09.989 lat (msec): min=2084, max=10816, avg=10168.09, stdev=1716.39 00:21:09.989 clat percentiles (msec): 00:21:09.989 | 1.00th=[ 57], 5.00th=[ 4329], 10.00th=[ 8557], 20.00th=[10537], 00:21:09.989 | 30.00th=[10671], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:21:09.989 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:21:09.989 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:21:09.989 | 99.99th=[10805] 00:21:09.989 lat (msec) : 100=1.32%, >=2000=98.68% 00:21:09.989 cpu : usr=0.00%, sys=1.12%, ctx=136, majf=0, minf=19457 00:21:09.989 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.5%, 16=21.1%, 32=42.1%, >=64=17.1% 00:21:09.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.989 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.989 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.989 job3: (groupid=0, jobs=1): err= 0: pid=1877382: Mon Jul 15 15:03:24 2024 00:21:09.989 read: IOPS=3, BW=3792KiB/s (3883kB/s)(39.0MiB/10531msec) 00:21:09.989 slat (msec): min=2, max=2120, avg=268.52, stdev=686.91 00:21:09.989 clat (msec): min=58, max=10519, avg=4591.95, stdev=3005.09 00:21:09.989 lat (msec): min=2074, max=10530, avg=4860.48, stdev=3056.75 00:21:09.989 clat percentiles (msec): 00:21:09.989 | 1.00th=[ 58], 5.00th=[ 2072], 10.00th=[ 2089], 20.00th=[ 2140], 00:21:09.989 | 30.00th=[ 2165], 40.00th=[ 2165], 50.00th=[ 4279], 60.00th=[ 4279], 00:21:09.989 | 70.00th=[ 4329], 80.00th=[ 8557], 90.00th=[10402], 95.00th=[10537], 00:21:09.989 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:09.989 | 99.99th=[10537] 00:21:09.989 lat (msec) : 100=2.56%, >=2000=97.44% 00:21:09.989 cpu : usr=0.00%, sys=0.23%, ctx=83, majf=0, minf=9985 00:21:09.989 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:21:09.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.989 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:09.989 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.989 job3: (groupid=0, jobs=1): err= 0: pid=1877383: Mon Jul 15 15:03:24 2024 00:21:09.989 read: IOPS=85, BW=85.9MiB/s 
(90.1MB/s)(919MiB/10696msec) 00:21:09.989 slat (usec): min=28, max=2031.3k, avg=11574.32, stdev=96398.21 00:21:09.989 clat (msec): min=53, max=4250, avg=1352.70, stdev=1328.98 00:21:09.989 lat (msec): min=304, max=4453, avg=1364.27, stdev=1335.81 00:21:09.989 clat percentiles (msec): 00:21:09.989 | 1.00th=[ 305], 5.00th=[ 309], 10.00th=[ 330], 20.00th=[ 368], 00:21:09.989 | 30.00th=[ 393], 40.00th=[ 426], 50.00th=[ 642], 60.00th=[ 919], 00:21:09.989 | 70.00th=[ 1099], 80.00th=[ 3104], 90.00th=[ 3440], 95.00th=[ 4111], 00:21:09.989 | 99.00th=[ 4212], 99.50th=[ 4245], 99.90th=[ 4245], 99.95th=[ 4245], 00:21:09.989 | 99.99th=[ 4245] 00:21:09.989 bw ( KiB/s): min=16384, max=403456, per=3.60%, avg=161970.20, stdev=129969.73, samples=10 00:21:09.989 iops : min= 16, max= 394, avg=158.10, stdev=126.94, samples=10 00:21:09.989 lat (msec) : 100=0.11%, 500=47.44%, 750=7.07%, 1000=10.88%, 2000=5.55% 00:21:09.989 lat (msec) : >=2000=28.94% 00:21:09.989 cpu : usr=0.00%, sys=1.85%, ctx=1487, majf=0, minf=32769 00:21:09.989 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:21:09.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.989 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.989 issued rwts: total=919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.989 job3: (groupid=0, jobs=1): err= 0: pid=1877384: Mon Jul 15 15:03:24 2024 00:21:09.989 read: IOPS=54, BW=54.3MiB/s (56.9MB/s)(575MiB/10592msec) 00:21:09.989 slat (usec): min=486, max=2211.9k, avg=18326.95, stdev=115140.20 00:21:09.989 clat (msec): min=50, max=5638, avg=2221.26, stdev=1474.92 00:21:09.989 lat (msec): min=829, max=5643, avg=2239.58, stdev=1475.91 00:21:09.989 clat percentiles (msec): 00:21:09.989 | 1.00th=[ 827], 5.00th=[ 844], 10.00th=[ 869], 20.00th=[ 1003], 00:21:09.989 | 30.00th=[ 1318], 40.00th=[ 1536], 50.00th=[ 1703], 60.00th=[ 1905], 00:21:09.989 | 70.00th=[ 2123], 80.00th=[ 3876], 90.00th=[ 5067], 95.00th=[ 5403], 00:21:09.989 | 99.00th=[ 5604], 99.50th=[ 5604], 99.90th=[ 5671], 99.95th=[ 5671], 00:21:09.989 | 99.99th=[ 5671] 00:21:09.989 bw ( KiB/s): min=14336, max=153600, per=1.56%, avg=70419.69, stdev=39839.95, samples=13 00:21:09.989 iops : min= 14, max= 150, avg=68.77, stdev=38.91, samples=13 00:21:09.989 lat (msec) : 100=0.17%, 1000=19.65%, 2000=44.87%, >=2000=35.30% 00:21:09.989 cpu : usr=0.03%, sys=1.02%, ctx=1783, majf=0, minf=32769 00:21:09.989 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:21:09.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.989 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.989 issued rwts: total=575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.989 job3: (groupid=0, jobs=1): err= 0: pid=1877385: Mon Jul 15 15:03:24 2024 00:21:09.989 read: IOPS=49, BW=49.9MiB/s (52.4MB/s)(526MiB/10535msec) 00:21:09.989 slat (usec): min=31, max=2162.8k, avg=19917.92, stdev=129071.47 00:21:09.989 clat (msec): min=55, max=6393, avg=2418.66, stdev=1691.46 00:21:09.989 lat (msec): min=821, max=6400, avg=2438.58, stdev=1694.26 00:21:09.989 clat percentiles (msec): 00:21:09.989 | 1.00th=[ 818], 5.00th=[ 835], 10.00th=[ 860], 20.00th=[ 919], 00:21:09.989 | 30.00th=[ 1133], 40.00th=[ 1720], 50.00th=[ 2022], 60.00th=[ 2140], 00:21:09.989 | 70.00th=[ 2198], 80.00th=[ 4597], 90.00th=[ 5403], 
95.00th=[ 5873], 00:21:09.989 | 99.00th=[ 6275], 99.50th=[ 6342], 99.90th=[ 6409], 99.95th=[ 6409], 00:21:09.989 | 99.99th=[ 6409] 00:21:09.989 bw ( KiB/s): min= 6144, max=153600, per=1.39%, avg=62718.62, stdev=40760.92, samples=13 00:21:09.989 iops : min= 6, max= 150, avg=61.23, stdev=39.78, samples=13 00:21:09.989 lat (msec) : 100=0.19%, 1000=26.62%, 2000=21.29%, >=2000=51.90% 00:21:09.989 cpu : usr=0.01%, sys=0.96%, ctx=1553, majf=0, minf=32769 00:21:09.989 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:21:09.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.989 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.989 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.989 job3: (groupid=0, jobs=1): err= 0: pid=1877386: Mon Jul 15 15:03:24 2024 00:21:09.989 read: IOPS=74, BW=74.9MiB/s (78.6MB/s)(789MiB/10532msec) 00:21:09.989 slat (usec): min=23, max=2147.5k, avg=13275.96, stdev=106605.92 00:21:09.989 clat (msec): min=52, max=5598, avg=1609.28, stdev=1485.94 00:21:09.989 lat (msec): min=484, max=5607, avg=1622.55, stdev=1490.21 00:21:09.989 clat percentiles (msec): 00:21:09.989 | 1.00th=[ 489], 5.00th=[ 550], 10.00th=[ 584], 20.00th=[ 600], 00:21:09.989 | 30.00th=[ 667], 40.00th=[ 684], 50.00th=[ 751], 60.00th=[ 1267], 00:21:09.989 | 70.00th=[ 1754], 80.00th=[ 2106], 90.00th=[ 4597], 95.00th=[ 4933], 00:21:09.989 | 99.00th=[ 5470], 99.50th=[ 5537], 99.90th=[ 5604], 99.95th=[ 5604], 00:21:09.990 | 99.99th=[ 5604] 00:21:09.990 bw ( KiB/s): min= 2048, max=253445, per=2.31%, avg=104093.92, stdev=75326.16, samples=13 00:21:09.990 iops : min= 2, max= 247, avg=101.62, stdev=73.48, samples=13 00:21:09.990 lat (msec) : 100=0.13%, 500=1.01%, 750=49.18%, 1000=7.10%, 2000=20.66% 00:21:09.990 lat (msec) : >=2000=21.93% 00:21:09.990 cpu : usr=0.02%, sys=1.08%, ctx=1253, majf=0, minf=32769 00:21:09.990 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:21:09.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.990 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.990 issued rwts: total=789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.990 job3: (groupid=0, jobs=1): err= 0: pid=1877388: Mon Jul 15 15:03:24 2024 00:21:09.990 read: IOPS=174, BW=174MiB/s (183MB/s)(1756MiB/10083msec) 00:21:09.990 slat (usec): min=33, max=319562, avg=5709.29, stdev=12469.31 00:21:09.990 clat (msec): min=43, max=1109, avg=706.43, stdev=143.59 00:21:09.990 lat (msec): min=94, max=1121, avg=712.14, stdev=144.24 00:21:09.990 clat percentiles (msec): 00:21:09.990 | 1.00th=[ 180], 5.00th=[ 535], 10.00th=[ 600], 20.00th=[ 617], 00:21:09.990 | 30.00th=[ 659], 40.00th=[ 701], 50.00th=[ 709], 60.00th=[ 718], 00:21:09.990 | 70.00th=[ 743], 80.00th=[ 793], 90.00th=[ 818], 95.00th=[ 995], 00:21:09.990 | 99.00th=[ 1053], 99.50th=[ 1070], 99.90th=[ 1099], 99.95th=[ 1116], 00:21:09.990 | 99.99th=[ 1116] 00:21:09.990 bw ( KiB/s): min=94208, max=219136, per=3.90%, avg=175441.37, stdev=30658.79, samples=19 00:21:09.990 iops : min= 92, max= 214, avg=171.26, stdev=29.91, samples=19 00:21:09.990 lat (msec) : 50=0.06%, 100=0.28%, 250=1.48%, 500=2.62%, 750=66.46% 00:21:09.990 lat (msec) : 1000=24.20%, 2000=4.90% 00:21:09.990 cpu : usr=0.12%, sys=3.30%, ctx=1634, majf=0, minf=32769 00:21:09.990 IO depths : 
1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:21:09.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.990 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.990 issued rwts: total=1756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.990 job3: (groupid=0, jobs=1): err= 0: pid=1877389: Mon Jul 15 15:03:24 2024 00:21:09.990 read: IOPS=35, BW=35.7MiB/s (37.4MB/s)(376MiB/10538msec) 00:21:09.990 slat (usec): min=24, max=2128.1k, avg=27886.48, stdev=182738.59 00:21:09.990 clat (msec): min=50, max=5925, avg=2589.00, stdev=1806.85 00:21:09.990 lat (msec): min=606, max=5960, avg=2616.89, stdev=1811.81 00:21:09.990 clat percentiles (msec): 00:21:09.990 | 1.00th=[ 609], 5.00th=[ 735], 10.00th=[ 743], 20.00th=[ 995], 00:21:09.990 | 30.00th=[ 1217], 40.00th=[ 1502], 50.00th=[ 1636], 60.00th=[ 1821], 00:21:09.990 | 70.00th=[ 4396], 80.00th=[ 4866], 90.00th=[ 5336], 95.00th=[ 5604], 00:21:09.990 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:21:09.990 | 99.99th=[ 5940] 00:21:09.990 bw ( KiB/s): min=12288, max=145408, per=1.41%, avg=63488.00, stdev=44142.43, samples=8 00:21:09.990 iops : min= 12, max= 142, avg=62.00, stdev=43.11, samples=8 00:21:09.990 lat (msec) : 100=0.27%, 750=10.11%, 1000=11.17%, 2000=38.56%, >=2000=39.89% 00:21:09.990 cpu : usr=0.00%, sys=0.87%, ctx=1082, majf=0, minf=32769 00:21:09.990 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2% 00:21:09.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.990 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:09.990 issued rwts: total=376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.990 job4: (groupid=0, jobs=1): err= 0: pid=1877406: Mon Jul 15 15:03:24 2024 00:21:09.990 read: IOPS=40, BW=40.7MiB/s (42.6MB/s)(408MiB/10035msec) 00:21:09.990 slat (usec): min=39, max=4208.4k, avg=24502.47, stdev=232736.23 00:21:09.990 clat (msec): min=34, max=8360, avg=2871.49, stdev=3414.58 00:21:09.990 lat (msec): min=35, max=8370, avg=2895.99, stdev=3423.88 00:21:09.990 clat percentiles (msec): 00:21:09.990 | 1.00th=[ 45], 5.00th=[ 94], 10.00th=[ 180], 20.00th=[ 393], 00:21:09.990 | 30.00th=[ 575], 40.00th=[ 609], 50.00th=[ 617], 60.00th=[ 667], 00:21:09.990 | 70.00th=[ 7483], 80.00th=[ 7819], 90.00th=[ 8154], 95.00th=[ 8288], 00:21:09.990 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 00:21:09.990 | 99.99th=[ 8356] 00:21:09.990 bw ( KiB/s): min= 6144, max=215040, per=1.57%, avg=70857.40, stdev=86099.51, samples=5 00:21:09.990 iops : min= 6, max= 210, avg=69.00, stdev=84.26, samples=5 00:21:09.990 lat (msec) : 50=2.70%, 100=3.43%, 250=7.35%, 500=11.76%, 750=35.54% 00:21:09.990 lat (msec) : 1000=1.23%, 2000=5.88%, >=2000=32.11% 00:21:09.990 cpu : usr=0.04%, sys=1.16%, ctx=722, majf=0, minf=32769 00:21:09.990 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.8%, >=64=84.6% 00:21:09.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.990 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:09.990 issued rwts: total=408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.990 job4: (groupid=0, jobs=1): err= 0: pid=1877407: Mon Jul 15 15:03:24 2024 00:21:09.990 read: IOPS=58, 
BW=58.9MiB/s (61.7MB/s)(633MiB/10754msec) 00:21:09.990 slat (usec): min=30, max=2020.1k, avg=16900.57, stdev=85399.55 00:21:09.990 clat (msec): min=51, max=4667, avg=2022.62, stdev=938.10 00:21:09.990 lat (msec): min=1248, max=4667, avg=2039.52, stdev=936.56 00:21:09.990 clat percentiles (msec): 00:21:09.990 | 1.00th=[ 1250], 5.00th=[ 1284], 10.00th=[ 1301], 20.00th=[ 1351], 00:21:09.990 | 30.00th=[ 1418], 40.00th=[ 1519], 50.00th=[ 1670], 60.00th=[ 1871], 00:21:09.990 | 70.00th=[ 1955], 80.00th=[ 2601], 90.00th=[ 3742], 95.00th=[ 4396], 00:21:09.990 | 99.00th=[ 4597], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:21:09.990 | 99.99th=[ 4665] 00:21:09.990 bw ( KiB/s): min=14336, max=112640, per=1.53%, avg=68925.87, stdev=29344.03, samples=15 00:21:09.990 iops : min= 14, max= 110, avg=67.20, stdev=28.58, samples=15 00:21:09.990 lat (msec) : 100=0.16%, 2000=77.41%, >=2000=22.43% 00:21:09.990 cpu : usr=0.04%, sys=1.66%, ctx=1426, majf=0, minf=32769 00:21:09.990 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:21:09.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.990 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.990 issued rwts: total=633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.990 job4: (groupid=0, jobs=1): err= 0: pid=1877408: Mon Jul 15 15:03:24 2024 00:21:09.990 read: IOPS=44, BW=44.1MiB/s (46.3MB/s)(473MiB/10722msec) 00:21:09.990 slat (usec): min=45, max=2087.3k, avg=22533.11, stdev=178061.38 00:21:09.990 clat (msec): min=60, max=8842, avg=2776.45, stdev=2989.55 00:21:09.990 lat (msec): min=599, max=8848, avg=2798.98, stdev=2998.78 00:21:09.990 clat percentiles (msec): 00:21:09.990 | 1.00th=[ 609], 5.00th=[ 617], 10.00th=[ 634], 20.00th=[ 651], 00:21:09.990 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 718], 60.00th=[ 1720], 00:21:09.990 | 70.00th=[ 2836], 80.00th=[ 7080], 90.00th=[ 7349], 95.00th=[ 8792], 00:21:09.990 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:21:09.990 | 99.99th=[ 8792] 00:21:09.990 bw ( KiB/s): min=24576, max=190464, per=1.96%, avg=88320.00, stdev=69178.22, samples=8 00:21:09.990 iops : min= 24, max= 186, avg=86.25, stdev=67.56, samples=8 00:21:09.990 lat (msec) : 100=0.21%, 750=56.03%, 2000=9.94%, >=2000=33.83% 00:21:09.990 cpu : usr=0.01%, sys=1.45%, ctx=919, majf=0, minf=32769 00:21:09.990 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:21:09.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.990 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:09.990 issued rwts: total=473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.990 job4: (groupid=0, jobs=1): err= 0: pid=1877409: Mon Jul 15 15:03:24 2024 00:21:09.990 read: IOPS=8, BW=9133KiB/s (9353kB/s)(95.0MiB/10651msec) 00:21:09.990 slat (usec): min=413, max=2119.5k, avg=111462.04, stdev=424670.19 00:21:09.990 clat (msec): min=60, max=10649, avg=8126.75, stdev=3054.57 00:21:09.990 lat (msec): min=1793, max=10649, avg=8238.21, stdev=2948.47 00:21:09.990 clat percentiles (msec): 00:21:09.990 | 1.00th=[ 61], 5.00th=[ 1854], 10.00th=[ 2165], 20.00th=[ 4279], 00:21:09.990 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10000], 60.00th=[10268], 00:21:09.990 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10671], 00:21:09.990 | 99.00th=[10671], 
99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:21:09.990 | 99.99th=[10671] 00:21:09.990 lat (msec) : 100=1.05%, 2000=5.26%, >=2000=93.68% 00:21:09.990 cpu : usr=0.00%, sys=0.63%, ctx=251, majf=0, minf=24321 00:21:09.990 IO depths : 1=1.1%, 2=2.1%, 4=4.2%, 8=8.4%, 16=16.8%, 32=33.7%, >=64=33.7% 00:21:09.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.990 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.990 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.990 job4: (groupid=0, jobs=1): err= 0: pid=1877410: Mon Jul 15 15:03:24 2024 00:21:09.990 read: IOPS=108, BW=109MiB/s (114MB/s)(1095MiB/10063msec) 00:21:09.990 slat (usec): min=27, max=78582, avg=9129.22, stdev=14046.51 00:21:09.990 clat (msec): min=61, max=2201, avg=1088.32, stdev=462.18 00:21:09.990 lat (msec): min=62, max=2205, avg=1097.45, stdev=464.76 00:21:09.990 clat percentiles (msec): 00:21:09.990 | 1.00th=[ 78], 5.00th=[ 330], 10.00th=[ 659], 20.00th=[ 718], 00:21:09.990 | 30.00th=[ 768], 40.00th=[ 852], 50.00th=[ 1083], 60.00th=[ 1150], 00:21:09.990 | 70.00th=[ 1318], 80.00th=[ 1469], 90.00th=[ 1737], 95.00th=[ 1972], 00:21:09.990 | 99.00th=[ 2165], 99.50th=[ 2165], 99.90th=[ 2198], 99.95th=[ 2198], 00:21:09.990 | 99.99th=[ 2198] 00:21:09.990 bw ( KiB/s): min= 4096, max=188416, per=2.45%, avg=110123.89, stdev=50709.47, samples=18 00:21:09.990 iops : min= 4, max= 184, avg=107.50, stdev=49.52, samples=18 00:21:09.990 lat (msec) : 100=1.00%, 250=2.83%, 500=3.84%, 750=17.63%, 1000=18.81% 00:21:09.990 lat (msec) : 2000=51.51%, >=2000=4.38% 00:21:09.990 cpu : usr=0.03%, sys=1.98%, ctx=1760, majf=0, minf=32769 00:21:09.990 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:21:09.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.990 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.990 issued rwts: total=1095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.990 job4: (groupid=0, jobs=1): err= 0: pid=1877411: Mon Jul 15 15:03:24 2024 00:21:09.990 read: IOPS=3, BW=3892KiB/s (3985kB/s)(40.0MiB/10525msec) 00:21:09.990 slat (usec): min=1643, max=2125.2k, avg=262870.00, stdev=649990.88 00:21:09.990 clat (msec): min=9, max=10478, avg=3395.84, stdev=2911.79 00:21:09.990 lat (msec): min=1489, max=10524, avg=3658.71, stdev=3068.68 00:21:09.990 clat percentiles (msec): 00:21:09.990 | 1.00th=[ 10], 5.00th=[ 1485], 10.00th=[ 1519], 20.00th=[ 1569], 00:21:09.990 | 30.00th=[ 1636], 40.00th=[ 1687], 50.00th=[ 1787], 60.00th=[ 1921], 00:21:09.991 | 70.00th=[ 2106], 80.00th=[ 6409], 90.00th=[ 8490], 95.00th=[ 8557], 00:21:09.991 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:09.991 | 99.99th=[10537] 00:21:09.991 lat (msec) : 10=2.50%, 2000=57.50%, >=2000=40.00% 00:21:09.991 cpu : usr=0.00%, sys=0.21%, ctx=184, majf=0, minf=10241 00:21:09.991 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:21:09.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.991 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:09.991 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.991 job4: (groupid=0, jobs=1): err= 0: pid=1877412: Mon Jul 15 
15:03:24 2024 00:21:09.991 read: IOPS=7, BW=8009KiB/s (8201kB/s)(83.0MiB/10612msec) 00:21:09.991 slat (usec): min=624, max=2109.2k, avg=127400.82, stdev=476752.25 00:21:09.991 clat (msec): min=36, max=10610, avg=6735.58, stdev=2991.07 00:21:09.991 lat (msec): min=2007, max=10611, avg=6862.99, stdev=2926.78 00:21:09.991 clat percentiles (msec): 00:21:09.991 | 1.00th=[ 37], 5.00th=[ 4077], 10.00th=[ 4111], 20.00th=[ 4144], 00:21:09.991 | 30.00th=[ 4212], 40.00th=[ 4245], 50.00th=[ 6409], 60.00th=[ 6409], 00:21:09.991 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:21:09.991 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:21:09.991 | 99.99th=[10671] 00:21:09.991 lat (msec) : 50=1.20%, >=2000=98.80% 00:21:09.991 cpu : usr=0.01%, sys=0.57%, ctx=148, majf=0, minf=21249 00:21:09.991 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:21:09.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.991 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.991 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.991 job4: (groupid=0, jobs=1): err= 0: pid=1877414: Mon Jul 15 15:03:24 2024 00:21:09.991 read: IOPS=6, BW=6850KiB/s (7014kB/s)(71.0MiB/10614msec) 00:21:09.991 slat (usec): min=689, max=2114.2k, avg=148754.04, stdev=508371.20 00:21:09.991 clat (msec): min=51, max=10608, avg=4738.35, stdev=3663.74 00:21:09.991 lat (msec): min=1768, max=10613, avg=4887.10, stdev=3685.08 00:21:09.991 clat percentiles (msec): 00:21:09.991 | 1.00th=[ 52], 5.00th=[ 1787], 10.00th=[ 1838], 20.00th=[ 1921], 00:21:09.991 | 30.00th=[ 2022], 40.00th=[ 2056], 50.00th=[ 2140], 60.00th=[ 4245], 00:21:09.991 | 70.00th=[ 6477], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:21:09.991 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:21:09.991 | 99.99th=[10671] 00:21:09.991 lat (msec) : 100=1.41%, 2000=28.17%, >=2000=70.42% 00:21:09.991 cpu : usr=0.02%, sys=0.46%, ctx=146, majf=0, minf=18177 00:21:09.991 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:21:09.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.991 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.991 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.991 job4: (groupid=0, jobs=1): err= 0: pid=1877415: Mon Jul 15 15:03:24 2024 00:21:09.991 read: IOPS=25, BW=25.5MiB/s (26.8MB/s)(256MiB/10034msec) 00:21:09.991 slat (usec): min=31, max=2094.9k, avg=39069.58, stdev=253052.97 00:21:09.991 clat (msec): min=30, max=9380, avg=760.56, stdev=1314.13 00:21:09.991 lat (msec): min=36, max=9422, avg=799.63, stdev=1421.16 00:21:09.991 clat percentiles (msec): 00:21:09.991 | 1.00th=[ 37], 5.00th=[ 62], 10.00th=[ 118], 20.00th=[ 243], 00:21:09.991 | 30.00th=[ 359], 40.00th=[ 498], 50.00th=[ 584], 60.00th=[ 609], 00:21:09.991 | 70.00th=[ 617], 80.00th=[ 625], 90.00th=[ 827], 95.00th=[ 3205], 00:21:09.991 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 9329], 99.95th=[ 9329], 00:21:09.991 | 99.99th=[ 9329] 00:21:09.991 bw ( KiB/s): min=38912, max=38912, per=0.86%, avg=38912.00, stdev= 0.00, samples=1 00:21:09.991 iops : min= 38, max= 38, avg=38.00, stdev= 0.00, samples=1 00:21:09.991 lat (msec) : 50=3.52%, 100=6.25%, 250=12.11%, 500=19.14%, 750=47.27% 
00:21:09.991 lat (msec) : 1000=4.69%, 2000=1.56%, >=2000=5.47% 00:21:09.991 cpu : usr=0.00%, sys=0.79%, ctx=372, majf=0, minf=32769 00:21:09.991 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.5%, >=64=75.4% 00:21:09.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.991 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:21:09.991 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.991 job4: (groupid=0, jobs=1): err= 0: pid=1877416: Mon Jul 15 15:03:24 2024 00:21:09.991 read: IOPS=39, BW=39.6MiB/s (41.6MB/s)(417MiB/10519msec) 00:21:09.991 slat (usec): min=24, max=2119.7k, avg=25065.97, stdev=203897.83 00:21:09.991 clat (msec): min=64, max=6586, avg=1415.48, stdev=1515.22 00:21:09.991 lat (msec): min=305, max=6598, avg=1440.55, stdev=1536.44 00:21:09.991 clat percentiles (msec): 00:21:09.991 | 1.00th=[ 305], 5.00th=[ 309], 10.00th=[ 309], 20.00th=[ 309], 00:21:09.991 | 30.00th=[ 313], 40.00th=[ 313], 50.00th=[ 330], 60.00th=[ 414], 00:21:09.991 | 70.00th=[ 2165], 80.00th=[ 3339], 90.00th=[ 3440], 95.00th=[ 3507], 00:21:09.991 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6611], 99.95th=[ 6611], 00:21:09.991 | 99.99th=[ 6611] 00:21:09.991 bw ( KiB/s): min=40960, max=333824, per=4.38%, avg=197290.67, stdev=147432.29, samples=3 00:21:09.991 iops : min= 40, max= 326, avg=192.67, stdev=143.98, samples=3 00:21:09.991 lat (msec) : 100=0.24%, 500=60.91%, 2000=4.80%, >=2000=34.05% 00:21:09.991 cpu : usr=0.02%, sys=0.71%, ctx=429, majf=0, minf=32769 00:21:09.991 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.7%, >=64=84.9% 00:21:09.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.991 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:09.991 issued rwts: total=417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.991 job4: (groupid=0, jobs=1): err= 0: pid=1877417: Mon Jul 15 15:03:24 2024 00:21:09.991 read: IOPS=15, BW=15.2MiB/s (15.9MB/s)(162MiB/10658msec) 00:21:09.991 slat (usec): min=45, max=2113.1k, avg=65405.16, stdev=312624.10 00:21:09.991 clat (msec): min=60, max=8710, avg=5504.00, stdev=2379.56 00:21:09.991 lat (msec): min=1738, max=10388, avg=5569.40, stdev=2373.25 00:21:09.991 clat percentiles (msec): 00:21:09.991 | 1.00th=[ 1737], 5.00th=[ 1821], 10.00th=[ 1888], 20.00th=[ 2072], 00:21:09.991 | 30.00th=[ 5537], 40.00th=[ 5738], 50.00th=[ 5940], 60.00th=[ 6141], 00:21:09.991 | 70.00th=[ 6342], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[ 8658], 00:21:09.991 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:21:09.991 | 99.99th=[ 8658] 00:21:09.991 bw ( KiB/s): min=69632, max=69632, per=1.55%, avg=69632.00, stdev= 0.00, samples=1 00:21:09.991 iops : min= 68, max= 68, avg=68.00, stdev= 0.00, samples=1 00:21:09.991 lat (msec) : 100=0.62%, 2000=14.20%, >=2000=85.19% 00:21:09.991 cpu : usr=0.01%, sys=0.79%, ctx=317, majf=0, minf=32769 00:21:09.991 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.9%, 32=19.8%, >=64=61.1% 00:21:09.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.991 complete : 0=0.0%, 4=97.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.8% 00:21:09.991 issued rwts: total=162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.991 job4: (groupid=0, jobs=1): err= 0: 
pid=1877418: Mon Jul 15 15:03:24 2024 00:21:09.991 read: IOPS=130, BW=130MiB/s (136MB/s)(1394MiB/10717msec) 00:21:09.991 slat (usec): min=24, max=2048.8k, avg=7655.61, stdev=77899.06 00:21:09.991 clat (msec): min=36, max=3653, avg=934.59, stdev=1032.35 00:21:09.991 lat (msec): min=270, max=3654, avg=942.24, stdev=1036.30 00:21:09.991 clat percentiles (msec): 00:21:09.991 | 1.00th=[ 271], 5.00th=[ 271], 10.00th=[ 271], 20.00th=[ 275], 00:21:09.991 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 279], 60.00th=[ 485], 00:21:09.991 | 70.00th=[ 1053], 80.00th=[ 1167], 90.00th=[ 2970], 95.00th=[ 3239], 00:21:09.991 | 99.00th=[ 3574], 99.50th=[ 3641], 99.90th=[ 3641], 99.95th=[ 3641], 00:21:09.991 | 99.99th=[ 3641] 00:21:09.991 bw ( KiB/s): min=12288, max=477184, per=4.43%, avg=199384.23, stdev=178720.57, samples=13 00:21:09.991 iops : min= 12, max= 466, avg=194.69, stdev=174.51, samples=13 00:21:09.991 lat (msec) : 50=0.07%, 500=60.62%, 750=2.22%, 1000=4.16%, 2000=14.71% 00:21:09.991 lat (msec) : >=2000=18.22% 00:21:09.991 cpu : usr=0.11%, sys=2.16%, ctx=1460, majf=0, minf=32769 00:21:09.991 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:21:09.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.991 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.991 issued rwts: total=1394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.991 job4: (groupid=0, jobs=1): err= 0: pid=1877419: Mon Jul 15 15:03:24 2024 00:21:09.991 read: IOPS=41, BW=41.3MiB/s (43.4MB/s)(446MiB/10786msec) 00:21:09.991 slat (usec): min=44, max=2110.2k, avg=24041.72, stdev=195323.36 00:21:09.991 clat (msec): min=60, max=9165, avg=2969.28, stdev=3660.77 00:21:09.991 lat (msec): min=536, max=9168, avg=2993.32, stdev=3667.67 00:21:09.991 clat percentiles (msec): 00:21:09.991 | 1.00th=[ 535], 5.00th=[ 550], 10.00th=[ 558], 20.00th=[ 567], 00:21:09.991 | 30.00th=[ 584], 40.00th=[ 609], 50.00th=[ 642], 60.00th=[ 743], 00:21:09.991 | 70.00th=[ 2165], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 9060], 00:21:09.991 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:21:09.991 | 99.99th=[ 9194] 00:21:09.991 bw ( KiB/s): min= 4096, max=235520, per=2.07%, avg=93037.71, stdev=100064.99, samples=7 00:21:09.991 iops : min= 4, max= 230, avg=90.86, stdev=97.72, samples=7 00:21:09.991 lat (msec) : 100=0.22%, 750=60.76%, 1000=8.30%, >=2000=30.72% 00:21:09.991 cpu : usr=0.03%, sys=1.63%, ctx=836, majf=0, minf=32769 00:21:09.991 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9% 00:21:09.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.991 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:09.991 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.991 job5: (groupid=0, jobs=1): err= 0: pid=1877427: Mon Jul 15 15:03:24 2024 00:21:09.991 read: IOPS=73, BW=73.5MiB/s (77.0MB/s)(737MiB/10030msec) 00:21:09.992 slat (usec): min=29, max=2044.5k, avg=13564.29, stdev=89730.04 00:21:09.992 clat (msec): min=29, max=4377, avg=1512.12, stdev=1421.25 00:21:09.992 lat (msec): min=30, max=4384, avg=1525.68, stdev=1426.52 00:21:09.992 clat percentiles (msec): 00:21:09.992 | 1.00th=[ 92], 5.00th=[ 288], 10.00th=[ 321], 20.00th=[ 330], 00:21:09.992 | 30.00th=[ 447], 40.00th=[ 642], 50.00th=[ 776], 60.00th=[ 1045], 00:21:09.992 
| 70.00th=[ 2089], 80.00th=[ 3138], 90.00th=[ 4178], 95.00th=[ 4245], 00:21:09.992 | 99.00th=[ 4329], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4396], 00:21:09.992 | 99.99th=[ 4396] 00:21:09.992 bw ( KiB/s): min=18432, max=385024, per=2.34%, avg=105378.91, stdev=115280.26, samples=11 00:21:09.992 iops : min= 18, max= 376, avg=102.91, stdev=112.58, samples=11 00:21:09.992 lat (msec) : 50=0.41%, 100=0.95%, 250=1.90%, 500=30.66%, 750=11.53% 00:21:09.992 lat (msec) : 1000=13.98%, 2000=9.77%, >=2000=30.80% 00:21:09.992 cpu : usr=0.03%, sys=1.05%, ctx=1700, majf=0, minf=32769 00:21:09.992 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:21:09.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.992 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.992 issued rwts: total=737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.992 job5: (groupid=0, jobs=1): err= 0: pid=1877428: Mon Jul 15 15:03:24 2024 00:21:09.992 read: IOPS=74, BW=74.2MiB/s (77.8MB/s)(749MiB/10090msec) 00:21:09.992 slat (usec): min=28, max=2089.2k, avg=13364.92, stdev=107787.99 00:21:09.992 clat (msec): min=75, max=5428, avg=1325.34, stdev=1420.88 00:21:09.992 lat (msec): min=90, max=5434, avg=1338.70, stdev=1428.15 00:21:09.992 clat percentiles (msec): 00:21:09.992 | 1.00th=[ 167], 5.00th=[ 300], 10.00th=[ 305], 20.00th=[ 321], 00:21:09.992 | 30.00th=[ 351], 40.00th=[ 393], 50.00th=[ 793], 60.00th=[ 1250], 00:21:09.992 | 70.00th=[ 1536], 80.00th=[ 2039], 90.00th=[ 2265], 95.00th=[ 5336], 00:21:09.992 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:21:09.992 | 99.99th=[ 5403] 00:21:09.992 bw ( KiB/s): min= 8192, max=411648, per=3.14%, avg=141312.00, stdev=143998.58, samples=9 00:21:09.992 iops : min= 8, max= 402, avg=138.00, stdev=140.62, samples=9 00:21:09.992 lat (msec) : 100=0.27%, 250=0.80%, 500=43.12%, 750=3.34%, 1000=7.61% 00:21:09.992 lat (msec) : 2000=20.16%, >=2000=24.70% 00:21:09.992 cpu : usr=0.08%, sys=1.62%, ctx=1460, majf=0, minf=32769 00:21:09.992 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:21:09.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.992 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.992 issued rwts: total=749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.992 job5: (groupid=0, jobs=1): err= 0: pid=1877430: Mon Jul 15 15:03:24 2024 00:21:09.992 read: IOPS=71, BW=71.5MiB/s (75.0MB/s)(723MiB/10114msec) 00:21:09.992 slat (usec): min=29, max=1912.7k, avg=13849.12, stdev=101093.41 00:21:09.992 clat (msec): min=98, max=8741, avg=1488.04, stdev=1684.94 00:21:09.992 lat (msec): min=177, max=8756, avg=1501.88, stdev=1701.38 00:21:09.992 clat percentiles (msec): 00:21:09.992 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 359], 20.00th=[ 372], 00:21:09.992 | 30.00th=[ 384], 40.00th=[ 397], 50.00th=[ 447], 60.00th=[ 877], 00:21:09.992 | 70.00th=[ 1485], 80.00th=[ 2668], 90.00th=[ 4732], 95.00th=[ 5067], 00:21:09.992 | 99.00th=[ 5403], 99.50th=[ 5470], 99.90th=[ 8792], 99.95th=[ 8792], 00:21:09.992 | 99.99th=[ 8792] 00:21:09.992 bw ( KiB/s): min= 8192, max=356352, per=2.26%, avg=101557.08, stdev=123487.60, samples=12 00:21:09.992 iops : min= 8, max= 348, avg=99.17, stdev=120.60, samples=12 00:21:09.992 lat (msec) : 100=0.14%, 250=0.41%, 500=53.94%, 750=3.32%, 1000=4.29% 
00:21:09.992 lat (msec) : 2000=13.00%, >=2000=24.90% 00:21:09.992 cpu : usr=0.02%, sys=1.48%, ctx=1977, majf=0, minf=32769 00:21:09.992 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:21:09.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.992 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.992 issued rwts: total=723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.992 job5: (groupid=0, jobs=1): err= 0: pid=1877431: Mon Jul 15 15:03:24 2024 00:21:09.992 read: IOPS=103, BW=103MiB/s (108MB/s)(1045MiB/10110msec) 00:21:09.992 slat (usec): min=23, max=2086.2k, avg=9595.60, stdev=87487.47 00:21:09.992 clat (msec): min=78, max=3829, avg=979.36, stdev=951.44 00:21:09.992 lat (msec): min=108, max=3832, avg=988.95, stdev=957.04 00:21:09.992 clat percentiles (msec): 00:21:09.992 | 1.00th=[ 113], 5.00th=[ 144], 10.00th=[ 190], 20.00th=[ 330], 00:21:09.992 | 30.00th=[ 426], 40.00th=[ 477], 50.00th=[ 625], 60.00th=[ 743], 00:21:09.992 | 70.00th=[ 1053], 80.00th=[ 1318], 90.00th=[ 2802], 95.00th=[ 3272], 00:21:09.992 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 3842], 99.95th=[ 3842], 00:21:09.992 | 99.99th=[ 3842] 00:21:09.992 bw ( KiB/s): min=36056, max=403456, per=3.79%, avg=170449.73, stdev=129148.01, samples=11 00:21:09.992 iops : min= 35, max= 394, avg=166.36, stdev=126.18, samples=11 00:21:09.992 lat (msec) : 100=0.10%, 250=13.97%, 500=26.99%, 750=19.14%, 1000=7.18% 00:21:09.992 lat (msec) : 2000=18.66%, >=2000=13.97% 00:21:09.992 cpu : usr=0.06%, sys=1.39%, ctx=1991, majf=0, minf=32769 00:21:09.992 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:21:09.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.992 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.992 issued rwts: total=1045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.992 job5: (groupid=0, jobs=1): err= 0: pid=1877432: Mon Jul 15 15:03:24 2024 00:21:09.992 read: IOPS=110, BW=110MiB/s (116MB/s)(1118MiB/10144msec) 00:21:09.992 slat (usec): min=27, max=2129.6k, avg=8981.33, stdev=80641.73 00:21:09.992 clat (msec): min=98, max=5885, avg=1115.67, stdev=1505.39 00:21:09.992 lat (msec): min=200, max=5887, avg=1124.65, stdev=1511.06 00:21:09.992 clat percentiles (msec): 00:21:09.992 | 1.00th=[ 275], 5.00th=[ 288], 10.00th=[ 300], 20.00th=[ 338], 00:21:09.992 | 30.00th=[ 372], 40.00th=[ 384], 50.00th=[ 414], 60.00th=[ 634], 00:21:09.992 | 70.00th=[ 986], 80.00th=[ 1083], 90.00th=[ 3473], 95.00th=[ 5470], 00:21:09.992 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873], 00:21:09.992 | 99.99th=[ 5873] 00:21:09.992 bw ( KiB/s): min= 8192, max=395264, per=3.46%, avg=155885.00, stdev=143455.19, samples=13 00:21:09.992 iops : min= 8, max= 386, avg=152.15, stdev=140.04, samples=13 00:21:09.992 lat (msec) : 100=0.09%, 250=0.09%, 500=57.07%, 750=5.10%, 1000=9.39% 00:21:09.992 lat (msec) : 2000=15.03%, >=2000=13.24% 00:21:09.992 cpu : usr=0.02%, sys=2.32%, ctx=1827, majf=0, minf=32769 00:21:09.992 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:21:09.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.992 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.992 issued rwts: total=1118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
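Note on the per-job blocks above and below: this is standard fio read-workload output. slat/clat/lat are submission, completion, and total latency, the percentile table is the completion-latency distribution, and "issued rwts" counts the reads/writes/trims issued by that job. The exact job options used by the SRQ overwhelm test are not shown in this excerpt; a minimal sketch of a fio invocation that would produce per-job blocks of this shape (one job per connected namespace, queue depth 128 to pressure the shared receive queue) looks roughly like the following, with the device name, block size, and runtime as assumed placeholders:

    # Hypothetical sketch only -- the filename, block size, and runtime are
    # assumptions, not the options srq_overwhelm.sh actually passes to fio.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randread --bs=128k --iodepth=128 --ioengine=libaio --direct=1 \
        --time_based --runtime=10 --group_reporting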
00:21:09.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.992 job5: (groupid=0, jobs=1): err= 0: pid=1877433: Mon Jul 15 15:03:24 2024 00:21:09.992 read: IOPS=101, BW=102MiB/s (107MB/s)(1025MiB/10086msec) 00:21:09.992 slat (usec): min=29, max=2068.7k, avg=9751.34, stdev=90047.67 00:21:09.992 clat (msec): min=84, max=4809, avg=971.56, stdev=992.22 00:21:09.992 lat (msec): min=86, max=4848, avg=981.31, stdev=999.17 00:21:09.992 clat percentiles (msec): 00:21:09.992 | 1.00th=[ 165], 5.00th=[ 422], 10.00th=[ 651], 20.00th=[ 693], 00:21:09.992 | 30.00th=[ 701], 40.00th=[ 726], 50.00th=[ 743], 60.00th=[ 768], 00:21:09.992 | 70.00th=[ 793], 80.00th=[ 802], 90.00th=[ 869], 95.00th=[ 4732], 00:21:09.992 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:21:09.992 | 99.99th=[ 4799] 00:21:09.992 bw ( KiB/s): min=96256, max=190464, per=3.72%, avg=167191.27, stdev=27266.45, samples=11 00:21:09.992 iops : min= 94, max= 186, avg=163.27, stdev=26.63, samples=11 00:21:09.992 lat (msec) : 100=0.39%, 250=1.85%, 500=4.10%, 750=45.76%, 1000=41.46% 00:21:09.992 lat (msec) : >=2000=6.44% 00:21:09.992 cpu : usr=0.04%, sys=2.21%, ctx=976, majf=0, minf=32769 00:21:09.992 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:21:09.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.992 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.992 issued rwts: total=1025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.992 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.992 job5: (groupid=0, jobs=1): err= 0: pid=1877434: Mon Jul 15 15:03:24 2024 00:21:09.992 read: IOPS=64, BW=64.7MiB/s (67.9MB/s)(650MiB/10039msec) 00:21:09.992 slat (usec): min=32, max=1867.7k, avg=15395.09, stdev=85073.45 00:21:09.992 clat (msec): min=28, max=3803, avg=1497.34, stdev=945.76 00:21:09.992 lat (msec): min=71, max=3833, avg=1512.74, stdev=951.18 00:21:09.992 clat percentiles (msec): 00:21:09.992 | 1.00th=[ 201], 5.00th=[ 634], 10.00th=[ 726], 20.00th=[ 785], 00:21:09.992 | 30.00th=[ 877], 40.00th=[ 927], 50.00th=[ 1028], 60.00th=[ 1083], 00:21:09.993 | 70.00th=[ 1989], 80.00th=[ 2467], 90.00th=[ 3071], 95.00th=[ 3507], 00:21:09.993 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 3809], 99.95th=[ 3809], 00:21:09.993 | 99.99th=[ 3809] 00:21:09.993 bw ( KiB/s): min= 6144, max=202752, per=1.75%, avg=78569.15, stdev=58053.15, samples=13 00:21:09.993 iops : min= 6, max= 198, avg=76.54, stdev=56.66, samples=13 00:21:09.993 lat (msec) : 50=0.15%, 100=0.15%, 250=0.92%, 500=2.31%, 750=9.69% 00:21:09.993 lat (msec) : 1000=33.54%, 2000=23.38%, >=2000=29.85% 00:21:09.993 cpu : usr=0.01%, sys=0.95%, ctx=1579, majf=0, minf=32769 00:21:09.993 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:21:09.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.993 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.993 issued rwts: total=650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.993 job5: (groupid=0, jobs=1): err= 0: pid=1877435: Mon Jul 15 15:03:24 2024 00:21:09.993 read: IOPS=6, BW=7115KiB/s (7286kB/s)(70.0MiB/10074msec) 00:21:09.993 slat (usec): min=322, max=2088.6k, avg=142876.22, stdev=472628.27 00:21:09.993 clat (msec): min=72, max=10066, avg=3745.44, stdev=3924.06 00:21:09.993 lat (msec): min=77, max=10073, avg=3888.32, stdev=3970.18 00:21:09.993 clat 
percentiles (msec): 00:21:09.993 | 1.00th=[ 72], 5.00th=[ 83], 10.00th=[ 100], 20.00th=[ 138], 00:21:09.993 | 30.00th=[ 169], 40.00th=[ 609], 50.00th=[ 1536], 60.00th=[ 3708], 00:21:09.993 | 70.00th=[ 5873], 80.00th=[ 7953], 90.00th=[10000], 95.00th=[10000], 00:21:09.993 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:09.993 | 99.99th=[10134] 00:21:09.993 lat (msec) : 100=10.00%, 250=27.14%, 500=1.43%, 750=4.29%, 2000=8.57% 00:21:09.993 lat (msec) : >=2000=48.57% 00:21:09.993 cpu : usr=0.01%, sys=0.62%, ctx=383, majf=0, minf=17921 00:21:09.993 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:21:09.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.993 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.993 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.993 job5: (groupid=0, jobs=1): err= 0: pid=1877436: Mon Jul 15 15:03:24 2024 00:21:09.993 read: IOPS=299, BW=300MiB/s (315MB/s)(3003MiB/10012msec) 00:21:09.993 slat (usec): min=23, max=1835.1k, avg=3325.70, stdev=34641.83 00:21:09.993 clat (msec): min=10, max=2873, avg=340.61, stdev=330.35 00:21:09.993 lat (msec): min=11, max=2876, avg=343.93, stdev=334.88 00:21:09.993 clat percentiles (msec): 00:21:09.993 | 1.00th=[ 34], 5.00th=[ 96], 10.00th=[ 106], 20.00th=[ 207], 00:21:09.993 | 30.00th=[ 209], 40.00th=[ 211], 50.00th=[ 215], 60.00th=[ 313], 00:21:09.993 | 70.00th=[ 334], 80.00th=[ 409], 90.00th=[ 443], 95.00th=[ 1028], 00:21:09.993 | 99.00th=[ 1418], 99.50th=[ 2836], 99.90th=[ 2869], 99.95th=[ 2869], 00:21:09.993 | 99.99th=[ 2869] 00:21:09.993 bw ( KiB/s): min=47104, max=624640, per=7.87%, avg=354304.00, stdev=197431.71, samples=14 00:21:09.993 iops : min= 46, max= 610, avg=346.00, stdev=192.80, samples=14 00:21:09.993 lat (msec) : 20=0.40%, 50=1.37%, 100=7.69%, 250=44.02%, 500=38.59% 00:21:09.993 lat (msec) : 1000=0.83%, 2000=6.33%, >=2000=0.77% 00:21:09.993 cpu : usr=0.17%, sys=2.62%, ctx=3073, majf=0, minf=32769 00:21:09.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:21:09.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.993 issued rwts: total=3003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.993 job5: (groupid=0, jobs=1): err= 0: pid=1877437: Mon Jul 15 15:03:24 2024 00:21:09.993 read: IOPS=142, BW=143MiB/s (150MB/s)(1436MiB/10065msec) 00:21:09.993 slat (usec): min=29, max=2096.1k, avg=6959.78, stdev=57499.61 00:21:09.993 clat (msec): min=62, max=4017, avg=828.04, stdev=891.91 00:21:09.993 lat (msec): min=130, max=4019, avg=835.00, stdev=895.60 00:21:09.993 clat percentiles (msec): 00:21:09.993 | 1.00th=[ 148], 5.00th=[ 414], 10.00th=[ 414], 20.00th=[ 418], 00:21:09.993 | 30.00th=[ 439], 40.00th=[ 518], 50.00th=[ 527], 60.00th=[ 542], 00:21:09.993 | 70.00th=[ 676], 80.00th=[ 743], 90.00th=[ 1267], 95.00th=[ 3574], 00:21:09.993 | 99.00th=[ 3977], 99.50th=[ 3977], 99.90th=[ 4010], 99.95th=[ 4010], 00:21:09.993 | 99.99th=[ 4010] 00:21:09.993 bw ( KiB/s): min= 2048, max=317440, per=3.97%, avg=178652.73, stdev=104675.54, samples=15 00:21:09.993 iops : min= 2, max= 310, avg=174.40, stdev=102.26, samples=15 00:21:09.993 lat (msec) : 100=0.07%, 250=2.09%, 500=31.96%, 750=46.87%, 1000=6.48% 00:21:09.993 
lat (msec) : 2000=3.69%, >=2000=8.84% 00:21:09.993 cpu : usr=0.13%, sys=1.58%, ctx=1801, majf=0, minf=32769 00:21:09.993 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:21:09.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.993 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.993 issued rwts: total=1436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.993 job5: (groupid=0, jobs=1): err= 0: pid=1877438: Mon Jul 15 15:03:24 2024 00:21:09.993 read: IOPS=110, BW=110MiB/s (116MB/s)(1117MiB/10133msec) 00:21:09.993 slat (usec): min=33, max=2108.7k, avg=8976.29, stdev=89354.43 00:21:09.993 clat (msec): min=98, max=6849, avg=1119.54, stdev=1875.96 00:21:09.993 lat (msec): min=178, max=6854, avg=1128.52, stdev=1883.71 00:21:09.993 clat percentiles (msec): 00:21:09.993 | 1.00th=[ 305], 5.00th=[ 309], 10.00th=[ 309], 20.00th=[ 309], 00:21:09.993 | 30.00th=[ 313], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 388], 00:21:09.993 | 70.00th=[ 506], 80.00th=[ 894], 90.00th=[ 5940], 95.00th=[ 6477], 00:21:09.993 | 99.00th=[ 6745], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6879], 00:21:09.993 | 99.99th=[ 6879] 00:21:09.993 bw ( KiB/s): min= 6144, max=419840, per=3.75%, avg=168704.08, stdev=165790.37, samples=12 00:21:09.993 iops : min= 6, max= 410, avg=164.67, stdev=161.88, samples=12 00:21:09.993 lat (msec) : 100=0.09%, 250=0.18%, 500=69.20%, 750=7.97%, 1000=6.09% 00:21:09.993 lat (msec) : 2000=4.83%, >=2000=11.64% 00:21:09.993 cpu : usr=0.06%, sys=2.55%, ctx=1793, majf=0, minf=32769 00:21:09.993 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:21:09.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.993 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.993 issued rwts: total=1117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.993 job5: (groupid=0, jobs=1): err= 0: pid=1877439: Mon Jul 15 15:03:24 2024 00:21:09.993 read: IOPS=52, BW=52.3MiB/s (54.8MB/s)(528MiB/10101msec) 00:21:09.993 slat (usec): min=55, max=1888.5k, avg=18983.62, stdev=101304.91 00:21:09.993 clat (msec): min=74, max=3434, avg=1980.22, stdev=907.98 00:21:09.993 lat (msec): min=104, max=3466, avg=1999.20, stdev=907.40 00:21:09.993 clat percentiles (msec): 00:21:09.993 | 1.00th=[ 148], 5.00th=[ 401], 10.00th=[ 667], 20.00th=[ 1045], 00:21:09.993 | 30.00th=[ 1200], 40.00th=[ 2022], 50.00th=[ 2265], 60.00th=[ 2400], 00:21:09.993 | 70.00th=[ 2500], 80.00th=[ 2802], 90.00th=[ 3272], 95.00th=[ 3306], 00:21:09.993 | 99.00th=[ 3406], 99.50th=[ 3440], 99.90th=[ 3440], 99.95th=[ 3440], 00:21:09.993 | 99.99th=[ 3440] 00:21:09.993 bw ( KiB/s): min=12288, max=120832, per=1.30%, avg=58496.14, stdev=38168.09, samples=14 00:21:09.993 iops : min= 12, max= 118, avg=57.00, stdev=37.26, samples=14 00:21:09.993 lat (msec) : 100=0.19%, 250=2.46%, 500=4.17%, 750=4.92%, 1000=6.06% 00:21:09.993 lat (msec) : 2000=21.78%, >=2000=60.42% 00:21:09.993 cpu : usr=0.05%, sys=1.40%, ctx=1480, majf=0, minf=32769 00:21:09.993 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.1% 00:21:09.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.993 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:09.993 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.993 
latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.993 job5: (groupid=0, jobs=1): err= 0: pid=1877440: Mon Jul 15 15:03:24 2024 00:21:09.993 read: IOPS=9, BW=9.82MiB/s (10.3MB/s)(99.0MiB/10084msec) 00:21:09.993 slat (usec): min=717, max=2091.4k, avg=101115.80, stdev=396923.34 00:21:09.993 clat (msec): min=72, max=10082, avg=4272.22, stdev=4100.78 00:21:09.993 lat (msec): min=101, max=10082, avg=4373.33, stdev=4119.54 00:21:09.993 clat percentiles (msec): 00:21:09.993 | 1.00th=[ 72], 5.00th=[ 138], 10.00th=[ 292], 20.00th=[ 592], 00:21:09.993 | 30.00th=[ 986], 40.00th=[ 1217], 50.00th=[ 1351], 60.00th=[ 5604], 00:21:09.993 | 70.00th=[ 7819], 80.00th=[10000], 90.00th=[10000], 95.00th=[10134], 00:21:09.993 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:21:09.993 | 99.99th=[10134] 00:21:09.993 lat (msec) : 100=1.01%, 250=8.08%, 500=7.07%, 750=9.09%, 1000=6.06% 00:21:09.993 lat (msec) : 2000=22.22%, >=2000=46.46% 00:21:09.993 cpu : usr=0.01%, sys=0.80%, ctx=367, majf=0, minf=25345 00:21:09.993 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.1%, 16=16.2%, 32=32.3%, >=64=36.4% 00:21:09.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.993 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:09.993 issued rwts: total=99,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.993 00:21:09.993 Run status group 0 (all jobs): 00:21:09.993 READ: bw=4395MiB/s (4608MB/s), 3779KiB/s-300MiB/s (3870kB/s-315MB/s), io=46.5GiB (49.9GB), run=10012-10839msec 00:21:09.993 00:21:09.993 Disk stats (read/write): 00:21:09.993 nvme0n1: ios=48261/0, merge=0/0, ticks=6539752/0, in_queue=6539752, util=97.63% 00:21:09.993 nvme1n1: ios=65331/0, merge=0/0, ticks=7003915/0, in_queue=7003915, util=98.35% 00:21:09.993 nvme2n1: ios=56915/0, merge=0/0, ticks=6470236/0, in_queue=6470236, util=98.43% 00:21:09.993 nvme3n1: ios=66416/0, merge=0/0, ticks=6675060/0, in_queue=6675060, util=98.66% 00:21:09.993 nvme4n1: ios=44188/0, merge=0/0, ticks=6686361/0, in_queue=6686361, util=98.46% 00:21:09.993 nvme5n1: ios=98239/0, merge=0/0, ticks=6846564/0, in_queue=6846564, util=99.14% 00:21:09.993 15:03:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:21:09.993 15:03:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:21:09.993 15:03:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:09.993 15:03:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:21:10.564 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:10.564 15:03:26 
nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:10.564 15:03:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:11.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:11.947 15:03:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:21:11.947 15:03:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:11.947 15:03:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:11.947 15:03:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:21:11.947 15:03:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:11.947 15:03:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:21:12.205 15:03:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:12.205 15:03:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.205 15:03:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.205 15:03:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:12.205 15:03:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.205 15:03:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:12.206 15:03:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:13.655 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.655 15:03:29 
nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:13.655 15:03:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:14.624 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:14.624 15:03:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:16.008 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:16.008 15:03:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:17.391 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:21:17.391 15:03:33 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:17.391 rmmod nvme_rdma 00:21:17.391 rmmod nvme_fabrics 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 1874631 ']' 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 1874631 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@948 -- # '[' -z 1874631 ']' 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # kill -0 1874631 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # uname 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1874631 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1874631' 00:21:17.391 killing process with pid 1874631 00:21:17.391 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # kill 1874631 00:21:17.391 15:03:33 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # wait 1874631 00:21:17.652 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:17.652 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:17.652 00:21:17.652 real 0m38.288s 00:21:17.652 user 2m16.214s 00:21:17.652 sys 0m18.314s 00:21:17.652 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:17.652 15:03:33 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:17.652 ************************************ 00:21:17.652 END TEST nvmf_srq_overwhelm 00:21:17.652 ************************************ 00:21:17.652 15:03:33 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:17.652 15:03:33 nvmf_rdma -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:21:17.652 15:03:33 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:17.652 15:03:33 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:17.652 15:03:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:17.652 ************************************ 00:21:17.652 START TEST nvmf_shutdown 00:21:17.652 ************************************ 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:21:17.652 * Looking for test storage... 00:21:17.652 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
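Note on the teardown traced above: the disconnect/delete sequence runs once per subsystem (cnode0 through cnode5). The initiator drops the connection with nvme disconnect, the helper waits for the namespace with the matching serial to disappear from lsblk, and the subsystem is then removed from the running target over the RPC socket. Condensed into plain shell, a sketch of what the xtrace shows (not the verbatim script) is:

    # Sketch of the teardown loop reflected in the trace above; the helper names
    # (waitforserial_disconnect, rpc_cmd) are taken from the trace itself.
    for i in $(seq 0 5); do
        # Drop the initiator-side connection to subsystem i
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # Block until the namespace with serial SPDK...00$i is gone from lsblk
        waitforserial_disconnect "SPDK0000000000000$i"
        # Remove the subsystem from the running nvmf target via JSON-RPC
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done

After the loop, nvmftestfini unloads nvme-rdma and nvme-fabrics and kills the nvmf target process (pid 1874631 here), which is the "killing process with pid 1874631" message in the trace.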
00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.652 
15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:17.652 ************************************ 00:21:17.652 START TEST nvmf_shutdown_tc1 00:21:17.652 ************************************ 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:17.652 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:17.653 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.653 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.653 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.653 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:17.653 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:17.653 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:17.653 15:03:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:25.797 15:03:41 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:25.797 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:25.797 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:25.797 Found net devices under 0000:98:00.0: mlx_0_0 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.797 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:25.798 Found net devices under 0000:98:00.1: mlx_0_1 00:21:25.798 15:03:41 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:25.798 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:25.798 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:25.798 altname enp152s0f0np0 00:21:25.798 altname ens817f0np0 00:21:25.798 inet 192.168.100.8/24 scope global mlx_0_0 00:21:25.798 valid_lft forever preferred_lft forever 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:25.798 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:25.798 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:25.798 altname enp152s0f1np1 00:21:25.798 altname ens817f1np1 00:21:25.798 inet 192.168.100.9/24 scope global mlx_0_1 00:21:25.798 valid_lft forever preferred_lft forever 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:25.798 15:03:41 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:25.798 15:03:41 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:25.798 192.168.100.9' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:25.798 192.168.100.9' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:25.798 192.168.100.9' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:25.798 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1885400 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1885400 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1885400 ']' 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
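The address discovery above reduces to two steps: read the IPv4 address off each RDMA-capable netdev, then take the first entry of RDMA_IP_LIST as NVMF_FIRST_TARGET_IP and the following one as NVMF_SECOND_TARGET_IP. A condensed sketch of exactly what the trace shows, with the two mlx5 ports found on this host hard-coded:

# Mirrors the get_ip_address pipeline from the trace.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Build the RDMA IP list and split it, as the head/tail pipeline above does.
RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9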
00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.059 15:03:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.059 [2024-07-15 15:03:41.928016] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:26.059 [2024-07-15 15:03:41.928086] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.059 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.059 [2024-07-15 15:03:42.016428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.059 [2024-07-15 15:03:42.111747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.059 [2024-07-15 15:03:42.111811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.059 [2024-07-15 15:03:42.111820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.059 [2024-07-15 15:03:42.111827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.059 [2024-07-15 15:03:42.111834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.059 [2024-07-15 15:03:42.111974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.059 [2024-07-15 15:03:42.112140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.059 [2024-07-15 15:03:42.112304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:26.059 [2024-07-15 15:03:42.112333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.000 [2024-07-15 15:03:42.792861] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f316b0/0x1f35ba0) succeed. 00:21:27.000 [2024-07-15 15:03:42.807600] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f32cf0/0x1f77230) succeed. 
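Once waitforlisten sees the RPC socket of pid 1885400, the script issues a single nvmf_create_transport call, and the two 'Create IB device' notices are the target binding the RDMA transport to both mlx5 ports in response. Issued by hand against the same socket (using scripts/rpc.py instead of the suite's rpc_cmd wrapper), the call would look roughly like this:

# Roughly equivalent to the rpc_cmd traced above; /var/tmp/spdk.sock is the default RPC socket.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192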
00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.000 15:03:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.000 Malloc1 00:21:27.000 [2024-07-15 15:03:43.026863] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:27.000 Malloc2 00:21:27.261 Malloc3 00:21:27.261 Malloc4 
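The create_subsystems loop above only traces a bare 'cat' per iteration, so the exact RPC lines written to rpcs.txt are not visible in this log. Judging from the Malloc1-Malloc10 bdevs that follow, the MALLOC_BDEV_SIZE=64/MALLOC_BLOCK_SIZE=512 values set earlier, and the single 192.168.100.8:4420 listener notice, each iteration plausibly appends a block along these lines (a hypothetical reconstruction, not a quote of shutdown.sh):

# Assumed shape of the per-subsystem block appended to rpcs.txt for index $i.
for i in {1..10}; do
cat <<EOF >> rpcs.txt
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
# The accumulated file is then replayed through the RPC wrapper in one shot,
# which is what the bare 'rpc_cmd' entry at target/shutdown.sh@35 suggests.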
00:21:27.261 Malloc5 00:21:27.261 Malloc6 00:21:27.261 Malloc7 00:21:27.261 Malloc8 00:21:27.522 Malloc9 00:21:27.522 Malloc10 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1885679 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1885679 /var/tmp/bdevperf.sock 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1885679 ']' 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.522 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": "$TEST_TRANSPORT", 00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": 
"$TEST_TRANSPORT", 00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": "$TEST_TRANSPORT", 00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": "$TEST_TRANSPORT", 00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": "$TEST_TRANSPORT", 00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": "$TEST_TRANSPORT", 
00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": "$TEST_TRANSPORT", 00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.523 [2024-07-15 15:03:43.491026] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:27.523 [2024-07-15 15:03:43.491126] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": "$TEST_TRANSPORT", 00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": "$TEST_TRANSPORT", 00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.523 { 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme$subsystem", 00:21:27.523 "trtype": "$TEST_TRANSPORT", 00:21:27.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "$NVMF_PORT", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.523 "hdgst": ${hdgst:-false}, 00:21:27.523 "ddgst": ${ddgst:-false} 00:21:27.523 }, 00:21:27.523 "method": "bdev_nvme_attach_controller" 00:21:27.523 } 00:21:27.523 EOF 00:21:27.523 )") 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:27.523 15:03:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:27.523 "params": { 00:21:27.523 "name": "Nvme1", 00:21:27.523 "trtype": "rdma", 00:21:27.523 "traddr": "192.168.100.8", 00:21:27.523 "adrfam": "ipv4", 00:21:27.523 "trsvcid": "4420", 00:21:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.523 "hdgst": false, 00:21:27.523 "ddgst": false 00:21:27.523 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 },{ 00:21:27.524 "params": { 00:21:27.524 "name": "Nvme2", 00:21:27.524 "trtype": "rdma", 00:21:27.524 "traddr": "192.168.100.8", 00:21:27.524 "adrfam": "ipv4", 00:21:27.524 "trsvcid": "4420", 00:21:27.524 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.524 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:27.524 "hdgst": false, 00:21:27.524 "ddgst": false 00:21:27.524 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 },{ 00:21:27.524 "params": { 00:21:27.524 "name": "Nvme3", 00:21:27.524 "trtype": "rdma", 00:21:27.524 "traddr": "192.168.100.8", 00:21:27.524 "adrfam": "ipv4", 00:21:27.524 "trsvcid": "4420", 00:21:27.524 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:27.524 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:27.524 "hdgst": false, 00:21:27.524 "ddgst": false 00:21:27.524 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 },{ 00:21:27.524 "params": { 00:21:27.524 "name": "Nvme4", 00:21:27.524 "trtype": "rdma", 00:21:27.524 "traddr": "192.168.100.8", 00:21:27.524 "adrfam": "ipv4", 00:21:27.524 "trsvcid": "4420", 00:21:27.524 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:27.524 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:27.524 "hdgst": false, 00:21:27.524 "ddgst": false 00:21:27.524 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 },{ 00:21:27.524 "params": { 00:21:27.524 "name": "Nvme5", 00:21:27.524 "trtype": "rdma", 00:21:27.524 "traddr": "192.168.100.8", 00:21:27.524 "adrfam": "ipv4", 00:21:27.524 "trsvcid": "4420", 00:21:27.524 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:27.524 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:27.524 "hdgst": false, 00:21:27.524 "ddgst": false 00:21:27.524 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 },{ 00:21:27.524 "params": { 00:21:27.524 "name": "Nvme6", 00:21:27.524 "trtype": "rdma", 00:21:27.524 "traddr": "192.168.100.8", 00:21:27.524 "adrfam": 
"ipv4", 00:21:27.524 "trsvcid": "4420", 00:21:27.524 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:27.524 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:27.524 "hdgst": false, 00:21:27.524 "ddgst": false 00:21:27.524 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 },{ 00:21:27.524 "params": { 00:21:27.524 "name": "Nvme7", 00:21:27.524 "trtype": "rdma", 00:21:27.524 "traddr": "192.168.100.8", 00:21:27.524 "adrfam": "ipv4", 00:21:27.524 "trsvcid": "4420", 00:21:27.524 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:27.524 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:27.524 "hdgst": false, 00:21:27.524 "ddgst": false 00:21:27.524 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 },{ 00:21:27.524 "params": { 00:21:27.524 "name": "Nvme8", 00:21:27.524 "trtype": "rdma", 00:21:27.524 "traddr": "192.168.100.8", 00:21:27.524 "adrfam": "ipv4", 00:21:27.524 "trsvcid": "4420", 00:21:27.524 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:27.524 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:27.524 "hdgst": false, 00:21:27.524 "ddgst": false 00:21:27.524 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 },{ 00:21:27.524 "params": { 00:21:27.524 "name": "Nvme9", 00:21:27.524 "trtype": "rdma", 00:21:27.524 "traddr": "192.168.100.8", 00:21:27.524 "adrfam": "ipv4", 00:21:27.524 "trsvcid": "4420", 00:21:27.524 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:27.524 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:27.524 "hdgst": false, 00:21:27.524 "ddgst": false 00:21:27.524 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 },{ 00:21:27.524 "params": { 00:21:27.524 "name": "Nvme10", 00:21:27.524 "trtype": "rdma", 00:21:27.524 "traddr": "192.168.100.8", 00:21:27.524 "adrfam": "ipv4", 00:21:27.524 "trsvcid": "4420", 00:21:27.524 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:27.524 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:27.524 "hdgst": false, 00:21:27.524 "ddgst": false 00:21:27.524 }, 00:21:27.524 "method": "bdev_nvme_attach_controller" 00:21:27.524 }' 00:21:27.524 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.524 [2024-07-15 15:03:43.561448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.784 [2024-07-15 15:03:43.626697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.723 15:03:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.723 15:03:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:28.723 15:03:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:28.723 15:03:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.723 15:03:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.723 15:03:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.723 15:03:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1885679 00:21:28.723 15:03:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:28.723 15:03:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:29.661 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1885679 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock 
--json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:29.661 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1885400 00:21:29.661 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:29.661 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:29.661 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:29.661 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:29.661 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.661 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.661 { 00:21:29.661 "params": { 00:21:29.661 "name": "Nvme$subsystem", 00:21:29.661 "trtype": "$TEST_TRANSPORT", 00:21:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.661 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.662 { 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme$subsystem", 00:21:29.662 "trtype": "$TEST_TRANSPORT", 00:21:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.662 { 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme$subsystem", 00:21:29.662 "trtype": "$TEST_TRANSPORT", 00:21:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.662 15:03:45 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.662 { 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme$subsystem", 00:21:29.662 "trtype": "$TEST_TRANSPORT", 00:21:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.662 { 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme$subsystem", 00:21:29.662 "trtype": "$TEST_TRANSPORT", 00:21:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.662 { 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme$subsystem", 00:21:29.662 "trtype": "$TEST_TRANSPORT", 00:21:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 [2024-07-15 15:03:45.581537] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
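The heredoc blocks above and below are gen_nvmf_target_json at work: for each subsystem index passed in (1 through 10 here) it appends one bdev_nvme_attach_controller parameter object to a bash array, then joins the array on commas and prints the result for bdevperf to read from /dev/fd/62. Reduced to the skeleton visible in this trace (the real helper also emits surrounding JSON scaffolding that the xtrace does not show):

# Simplified sketch of the config-generation loop traced above.
TEST_TRANSPORT=rdma
NVMF_FIRST_TARGET_IP=192.168.100.8
NVMF_PORT=4420
config=()
for subsystem in {1..10}; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # comma-joined list, as printed in the log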
00:21:29.662 [2024-07-15 15:03:45.581590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886227 ] 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.662 { 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme$subsystem", 00:21:29.662 "trtype": "$TEST_TRANSPORT", 00:21:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.662 { 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme$subsystem", 00:21:29.662 "trtype": "$TEST_TRANSPORT", 00:21:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.662 { 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme$subsystem", 00:21:29.662 "trtype": "$TEST_TRANSPORT", 00:21:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.662 { 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme$subsystem", 00:21:29.662 "trtype": "$TEST_TRANSPORT", 00:21:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "$NVMF_PORT", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:21:29.662 "hdgst": ${hdgst:-false}, 00:21:29.662 "ddgst": ${ddgst:-false} 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 } 00:21:29.662 EOF 00:21:29.662 )") 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.662 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:29.662 15:03:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme1", 00:21:29.662 "trtype": "rdma", 00:21:29.662 "traddr": "192.168.100.8", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "4420", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.662 "hdgst": false, 00:21:29.662 "ddgst": false 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 },{ 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme2", 00:21:29.662 "trtype": "rdma", 00:21:29.662 "traddr": "192.168.100.8", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "4420", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.662 "hdgst": false, 00:21:29.662 "ddgst": false 00:21:29.662 }, 00:21:29.662 "method": "bdev_nvme_attach_controller" 00:21:29.662 },{ 00:21:29.662 "params": { 00:21:29.662 "name": "Nvme3", 00:21:29.662 "trtype": "rdma", 00:21:29.662 "traddr": "192.168.100.8", 00:21:29.662 "adrfam": "ipv4", 00:21:29.662 "trsvcid": "4420", 00:21:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.662 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.662 "hdgst": false, 00:21:29.662 "ddgst": false 00:21:29.663 }, 00:21:29.663 "method": "bdev_nvme_attach_controller" 00:21:29.663 },{ 00:21:29.663 "params": { 00:21:29.663 "name": "Nvme4", 00:21:29.663 "trtype": "rdma", 00:21:29.663 "traddr": "192.168.100.8", 00:21:29.663 "adrfam": "ipv4", 00:21:29.663 "trsvcid": "4420", 00:21:29.663 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.663 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.663 "hdgst": false, 00:21:29.663 "ddgst": false 00:21:29.663 }, 00:21:29.663 "method": "bdev_nvme_attach_controller" 00:21:29.663 },{ 00:21:29.663 "params": { 00:21:29.663 "name": "Nvme5", 00:21:29.663 "trtype": "rdma", 00:21:29.663 "traddr": "192.168.100.8", 00:21:29.663 "adrfam": "ipv4", 00:21:29.663 "trsvcid": "4420", 00:21:29.663 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.663 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.663 "hdgst": false, 00:21:29.663 "ddgst": false 00:21:29.663 }, 00:21:29.663 "method": "bdev_nvme_attach_controller" 00:21:29.663 },{ 00:21:29.663 "params": { 00:21:29.663 "name": "Nvme6", 00:21:29.663 "trtype": "rdma", 00:21:29.663 "traddr": "192.168.100.8", 00:21:29.663 "adrfam": "ipv4", 00:21:29.663 "trsvcid": "4420", 00:21:29.663 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.663 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.663 "hdgst": false, 00:21:29.663 "ddgst": false 00:21:29.663 }, 00:21:29.663 "method": "bdev_nvme_attach_controller" 00:21:29.663 },{ 00:21:29.663 "params": { 00:21:29.663 "name": "Nvme7", 00:21:29.663 "trtype": "rdma", 00:21:29.663 "traddr": "192.168.100.8", 00:21:29.663 "adrfam": "ipv4", 00:21:29.663 "trsvcid": "4420", 00:21:29.663 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.663 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:21:29.663 "hdgst": false, 00:21:29.663 "ddgst": false 00:21:29.663 }, 00:21:29.663 "method": "bdev_nvme_attach_controller" 00:21:29.663 },{ 00:21:29.663 "params": { 00:21:29.663 "name": "Nvme8", 00:21:29.663 "trtype": "rdma", 00:21:29.663 "traddr": "192.168.100.8", 00:21:29.663 "adrfam": "ipv4", 00:21:29.663 "trsvcid": "4420", 00:21:29.663 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.663 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:29.663 "hdgst": false, 00:21:29.663 "ddgst": false 00:21:29.663 }, 00:21:29.663 "method": "bdev_nvme_attach_controller" 00:21:29.663 },{ 00:21:29.663 "params": { 00:21:29.663 "name": "Nvme9", 00:21:29.663 "trtype": "rdma", 00:21:29.663 "traddr": "192.168.100.8", 00:21:29.663 "adrfam": "ipv4", 00:21:29.663 "trsvcid": "4420", 00:21:29.663 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.663 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:29.663 "hdgst": false, 00:21:29.663 "ddgst": false 00:21:29.663 }, 00:21:29.663 "method": "bdev_nvme_attach_controller" 00:21:29.663 },{ 00:21:29.663 "params": { 00:21:29.663 "name": "Nvme10", 00:21:29.663 "trtype": "rdma", 00:21:29.663 "traddr": "192.168.100.8", 00:21:29.663 "adrfam": "ipv4", 00:21:29.663 "trsvcid": "4420", 00:21:29.663 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.663 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.663 "hdgst": false, 00:21:29.663 "ddgst": false 00:21:29.663 }, 00:21:29.663 "method": "bdev_nvme_attach_controller" 00:21:29.663 }' 00:21:29.663 [2024-07-15 15:03:45.647367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.663 [2024-07-15 15:03:45.711726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.601 Running I/O for 1 seconds... 00:21:31.983 00:21:31.983 Latency(us) 00:21:31.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.983 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.983 Verification LBA range: start 0x0 length 0x400 00:21:31.983 Nvme1n1 : 1.20 285.97 17.87 0.00 0.00 217063.87 14199.47 222822.40 00:21:31.983 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.983 Verification LBA range: start 0x0 length 0x400 00:21:31.983 Nvme2n1 : 1.20 279.05 17.44 0.00 0.00 218028.78 21517.65 209715.20 00:21:31.983 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.983 Verification LBA range: start 0x0 length 0x400 00:21:31.983 Nvme3n1 : 1.22 315.41 19.71 0.00 0.00 193533.30 11414.19 200103.25 00:21:31.983 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.983 Verification LBA range: start 0x0 length 0x400 00:21:31.983 Nvme4n1 : 1.22 327.43 20.46 0.00 0.00 182749.70 3686.40 171267.41 00:21:31.983 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.983 Verification LBA range: start 0x0 length 0x400 00:21:31.983 Nvme5n1 : 1.22 314.80 19.68 0.00 0.00 186981.69 11741.87 177384.11 00:21:31.983 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.983 Verification LBA range: start 0x0 length 0x400 00:21:31.983 Nvme6n1 : 1.21 317.61 19.85 0.00 0.00 183301.97 14964.05 148548.27 00:21:31.983 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.983 Verification LBA range: start 0x0 length 0x400 00:21:31.983 Nvme7n1 : 1.21 317.10 19.82 0.00 0.00 180365.94 15947.09 129324.37 00:21:31.983 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.983 
Verification LBA range: start 0x0 length 0x400 00:21:31.984 Nvme8n1 : 1.21 316.60 19.79 0.00 0.00 177437.87 16930.13 119712.43 00:21:31.984 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.984 Verification LBA range: start 0x0 length 0x400 00:21:31.984 Nvme9n1 : 1.22 314.50 19.66 0.00 0.00 174758.40 6144.00 173888.85 00:21:31.984 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.984 Verification LBA range: start 0x0 length 0x400 00:21:31.984 Nvme10n1 : 1.22 210.55 13.16 0.00 0.00 257018.88 13271.04 421178.03 00:21:31.984 =================================================================================================================== 00:21:31.984 Total : 2999.02 187.44 0.00 0.00 194463.09 3686.40 421178.03 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:32.246 rmmod nvme_rdma 00:21:32.246 rmmod nvme_fabrics 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1885400 ']' 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1885400 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1885400 ']' 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1885400 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1885400 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 
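The @948-@954 lines above are autotest_common.sh's killprocess guard sequence: confirm a pid was passed, probe it with the no-signal `kill -0`, resolve its comm name with ps, and only then signal and reap it. Reduced to a stand-alone sketch; the sudo branch is a guess at the intent of the @958 check, which is not taken in this run since the name resolved to reactor_1:

    # Reduced sketch of the killprocess helper traced here
    # (autotest_common.sh); non-Linux branches omitted.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        # kill -0 delivers no signal; it only verifies the pid exists and
        # that we may signal it (the `kill -0 1885400` line above).
        kill -0 "$pid" 2>/dev/null || return 0
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            # Assumed behavior: if launched through sudo, signal the child
            # that actually runs the app rather than the sudo wrapper.
            pid=$(ps --ppid "$pid" --no-headers -o pid= | head -n 1)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null   # reap it so the exit status is collected
    }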
00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1885400' 00:21:32.246 killing process with pid 1885400 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1885400 00:21:32.246 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1885400 00:21:32.506 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:32.506 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:32.506 00:21:32.506 real 0m14.831s 00:21:32.506 user 0m31.038s 00:21:32.506 sys 0m6.926s 00:21:32.506 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:32.506 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:32.506 ************************************ 00:21:32.506 END TEST nvmf_shutdown_tc1 00:21:32.506 ************************************ 00:21:32.506 15:03:48 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:32.506 15:03:48 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:32.506 15:03:48 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:32.506 15:03:48 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 ************************************ 00:21:32.768 START TEST nvmf_shutdown_tc2 00:21:32.768 ************************************ 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@10 -- # set +x 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.768 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:32.769 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:32.769 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:32.769 Found net devices under 0000:98:00.0: mlx_0_0 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
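The discovery loop above resolves each mlx5 PCI function to its kernel interface purely through sysfs: glob the device's net/ directory, then strip the path with a ##*/ expansion. The same two steps in isolation (hypothetical standalone snippet; the BDF is the first port from this run):

    #!/usr/bin/env bash
    # Resolve a PCI function to its netdev name, as the pci_net_devs
    # lines in the trace do.
    pci=0000:98:00.0

    # The kernel exposes one subdirectory per bound interface, e.g.
    # /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)

    # An unmatched glob is left as the literal pattern; treat that as
    # "no netdev bound" (driver missing or device unbound).
    [[ -e ${pci_net_devs[0]} ]] || { echo "no net devices under $pci" >&2; exit 1; }

    # ${name##*/} keeps only the basename: mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"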
00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:32.769 Found net devices under 0000:98:00.1: mlx_0_1 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:32.769 15:03:48 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:32.769 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:32.769 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:32.769 altname enp152s0f0np0 00:21:32.769 altname ens817f0np0 00:21:32.769 inet 192.168.100.8/24 scope global mlx_0_0 00:21:32.769 valid_lft forever preferred_lft forever 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.769 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.769 15:03:48 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:32.770 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:32.770 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:32.770 altname enp152s0f1np1 00:21:32.770 altname ens817f1np1 00:21:32.770 inet 192.168.100.9/24 scope global mlx_0_1 00:21:32.770 valid_lft forever preferred_lft forever 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_0 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:32.770 192.168.100.9' 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:32.770 192.168.100.9' 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:32.770 192.168.100.9' 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:21:32.770 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1886923 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1886923 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 
0 -e 0xFFFF -m 0x1E 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1886923 ']' 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.030 15:03:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.030 [2024-07-15 15:03:48.919085] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:33.030 [2024-07-15 15:03:48.919147] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.030 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.030 [2024-07-15 15:03:49.002781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.030 [2024-07-15 15:03:49.060266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.030 [2024-07-15 15:03:49.060299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.030 [2024-07-15 15:03:49.060305] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.030 [2024-07-15 15:03:49.060310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.030 [2024-07-15 15:03:49.060314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
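nvmfappstart, traced above, backgrounds nvmf_tgt (-m 0x1E runs reactors on cores 1-4, -e 0xFFFF enables every tracepoint group, -i 0 fixes the shared-memory id) and then parks in waitforlisten until the RPC socket answers, which is why the 'Waiting for process...' banner prints before the EAL output. A hypothetical reduction of that wait loop; the real helper in autotest_common.sh retries longer and reports errors:

    # Hypothetical reduction of the waitforlisten step traced above:
    # poll the target's RPC UNIX socket until it services a request.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # rpc_get_methods only succeeds once the app accepts RPCs
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1   # timed out waiting for the listener
    }

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    waitforlisten "$nvmfpid"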
00:21:33.030 [2024-07-15 15:03:49.060429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.030 [2024-07-15 15:03:49.060584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.030 [2024-07-15 15:03:49.060739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.030 [2024-07-15 15:03:49.060741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.970 [2024-07-15 15:03:49.776967] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x197a6b0/0x197eba0) succeed. 00:21:33.970 [2024-07-15 15:03:49.786612] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x197bcf0/0x19c0230) succeed. 
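With both IB devices registered, the nvmf_create_transport call above sizes the RDMA transport (1024 shared buffers, 8192-byte in-capsule data), and the Malloc1..Malloc10 lines that follow come from stamping out ten identical subsystems. Per iteration that is roughly the rpc.py sequence below; the harness actually batches the commands through the rpcs.txt file it later removes, and the malloc sizes here are illustrative, not taken from this run:

    # Approximate per-subsystem setup behind the Malloc1..Malloc10 lines.
    RPC="scripts/rpc.py"
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    for i in {1..10}; do
        $RPC bdev_malloc_create -b "Malloc$i" 128 512      # 128 MiB, 512 B blocks (illustrative)
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK$i"                                 # -a: allow any host
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420               # the listener logged above
    done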
00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.970 15:03:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.970 Malloc1 00:21:33.970 [2024-07-15 15:03:49.977670] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:33.970 Malloc2 00:21:34.231 Malloc3 00:21:34.231 Malloc4 
00:21:34.231 Malloc5 00:21:34.231 Malloc6 00:21:34.231 Malloc7 00:21:34.231 Malloc8 00:21:34.231 Malloc9 00:21:34.494 Malloc10 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1887190 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1887190 /var/tmp/bdevperf.sock 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1887190 ']' 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.494 { 00:21:34.494 "params": { 00:21:34.494 "name": "Nvme$subsystem", 00:21:34.494 "trtype": "$TEST_TRANSPORT", 00:21:34.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.494 "adrfam": "ipv4", 00:21:34.494 "trsvcid": "$NVMF_PORT", 00:21:34.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.494 "hdgst": ${hdgst:-false}, 00:21:34.494 "ddgst": ${ddgst:-false} 00:21:34.494 }, 00:21:34.494 "method": "bdev_nvme_attach_controller" 00:21:34.494 } 00:21:34.494 EOF 00:21:34.494 )") 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.494 { 00:21:34.494 "params": { 00:21:34.494 "name": "Nvme$subsystem", 
00:21:34.494 "trtype": "$TEST_TRANSPORT", 00:21:34.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.494 "adrfam": "ipv4", 00:21:34.494 "trsvcid": "$NVMF_PORT", 00:21:34.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.494 "hdgst": ${hdgst:-false}, 00:21:34.494 "ddgst": ${ddgst:-false} 00:21:34.494 }, 00:21:34.494 "method": "bdev_nvme_attach_controller" 00:21:34.494 } 00:21:34.494 EOF 00:21:34.494 )") 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.494 { 00:21:34.494 "params": { 00:21:34.494 "name": "Nvme$subsystem", 00:21:34.494 "trtype": "$TEST_TRANSPORT", 00:21:34.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.494 "adrfam": "ipv4", 00:21:34.494 "trsvcid": "$NVMF_PORT", 00:21:34.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.494 "hdgst": ${hdgst:-false}, 00:21:34.494 "ddgst": ${ddgst:-false} 00:21:34.494 }, 00:21:34.494 "method": "bdev_nvme_attach_controller" 00:21:34.494 } 00:21:34.494 EOF 00:21:34.494 )") 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.494 { 00:21:34.494 "params": { 00:21:34.494 "name": "Nvme$subsystem", 00:21:34.494 "trtype": "$TEST_TRANSPORT", 00:21:34.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.494 "adrfam": "ipv4", 00:21:34.494 "trsvcid": "$NVMF_PORT", 00:21:34.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.494 "hdgst": ${hdgst:-false}, 00:21:34.494 "ddgst": ${ddgst:-false} 00:21:34.494 }, 00:21:34.494 "method": "bdev_nvme_attach_controller" 00:21:34.494 } 00:21:34.494 EOF 00:21:34.494 )") 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.494 { 00:21:34.494 "params": { 00:21:34.494 "name": "Nvme$subsystem", 00:21:34.494 "trtype": "$TEST_TRANSPORT", 00:21:34.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.494 "adrfam": "ipv4", 00:21:34.494 "trsvcid": "$NVMF_PORT", 00:21:34.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.494 "hdgst": ${hdgst:-false}, 00:21:34.494 "ddgst": ${ddgst:-false} 00:21:34.494 }, 00:21:34.494 "method": "bdev_nvme_attach_controller" 00:21:34.494 } 00:21:34.494 EOF 00:21:34.494 )") 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.494 { 00:21:34.494 "params": { 00:21:34.494 "name": "Nvme$subsystem", 00:21:34.494 
"trtype": "$TEST_TRANSPORT", 00:21:34.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.494 "adrfam": "ipv4", 00:21:34.494 "trsvcid": "$NVMF_PORT", 00:21:34.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.494 "hdgst": ${hdgst:-false}, 00:21:34.494 "ddgst": ${ddgst:-false} 00:21:34.494 }, 00:21:34.494 "method": "bdev_nvme_attach_controller" 00:21:34.494 } 00:21:34.494 EOF 00:21:34.494 )") 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.494 [2024-07-15 15:03:50.424489] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:34.494 [2024-07-15 15:03:50.424544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887190 ] 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.494 { 00:21:34.494 "params": { 00:21:34.494 "name": "Nvme$subsystem", 00:21:34.494 "trtype": "$TEST_TRANSPORT", 00:21:34.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.494 "adrfam": "ipv4", 00:21:34.494 "trsvcid": "$NVMF_PORT", 00:21:34.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.494 "hdgst": ${hdgst:-false}, 00:21:34.494 "ddgst": ${ddgst:-false} 00:21:34.494 }, 00:21:34.494 "method": "bdev_nvme_attach_controller" 00:21:34.494 } 00:21:34.494 EOF 00:21:34.494 )") 00:21:34.494 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.495 { 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme$subsystem", 00:21:34.495 "trtype": "$TEST_TRANSPORT", 00:21:34.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "$NVMF_PORT", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.495 "hdgst": ${hdgst:-false}, 00:21:34.495 "ddgst": ${ddgst:-false} 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 } 00:21:34.495 EOF 00:21:34.495 )") 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.495 { 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme$subsystem", 00:21:34.495 "trtype": "$TEST_TRANSPORT", 00:21:34.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "$NVMF_PORT", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.495 "hdgst": ${hdgst:-false}, 00:21:34.495 "ddgst": ${ddgst:-false} 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 } 00:21:34.495 EOF 00:21:34.495 )") 00:21:34.495 
15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.495 { 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme$subsystem", 00:21:34.495 "trtype": "$TEST_TRANSPORT", 00:21:34.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "$NVMF_PORT", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.495 "hdgst": ${hdgst:-false}, 00:21:34.495 "ddgst": ${ddgst:-false} 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 } 00:21:34.495 EOF 00:21:34.495 )") 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:34.495 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:34.495 15:03:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme1", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 },{ 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme2", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 },{ 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme3", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 },{ 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme4", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 },{ 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme5", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 },{ 00:21:34.495 "params": { 
00:21:34.495 "name": "Nvme6", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 },{ 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme7", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 },{ 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme8", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 },{ 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme9", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 },{ 00:21:34.495 "params": { 00:21:34.495 "name": "Nvme10", 00:21:34.495 "trtype": "rdma", 00:21:34.495 "traddr": "192.168.100.8", 00:21:34.495 "adrfam": "ipv4", 00:21:34.495 "trsvcid": "4420", 00:21:34.495 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:34.495 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:34.495 "hdgst": false, 00:21:34.495 "ddgst": false 00:21:34.495 }, 00:21:34.495 "method": "bdev_nvme_attach_controller" 00:21:34.495 }' 00:21:34.495 [2024-07-15 15:03:50.491334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.757 [2024-07-15 15:03:50.556350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.698 Running I/O for 10 seconds... 
00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.698 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.959 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.959 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:35.959 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:35.959 15:03:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:36.220 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:36.220 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:36.220 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:36.220 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:36.220 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.220 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=150 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 150 -ge 100 ']' 00:21:36.482 15:03:52 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1887190 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1887190 ']' 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1887190 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887190 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887190' 00:21:36.482 killing process with pid 1887190 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1887190 00:21:36.482 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1887190 00:21:36.482 Received shutdown signal, test time was about 1.032957 seconds 00:21:36.482 00:21:36.482 Latency(us) 00:21:36.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.482 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme1n1 : 1.02 270.57 16.91 0.00 0.00 231782.86 8956.59 248162.99 00:21:36.482 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme2n1 : 1.02 276.11 17.26 0.00 0.00 222874.20 9229.65 235929.60 00:21:36.482 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme3n1 : 1.02 313.98 19.62 0.00 0.00 192347.61 4423.68 180879.36 00:21:36.482 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme4n1 : 1.02 313.53 19.60 0.00 0.00 188765.01 10158.08 172141.23 00:21:36.482 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme5n1 : 1.02 312.95 19.56 0.00 0.00 186194.01 11031.89 162529.28 00:21:36.482 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme6n1 : 1.02 312.36 19.52 0.00 0.00 182651.05 12069.55 144179.20 00:21:36.482 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme7n1 : 1.03 311.85 19.49 0.00 0.00 178333.44 12779.52 130198.19 00:21:36.482 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme8n1 : 1.03 311.28 19.45 0.00 0.00 175304.02 13707.95 116217.17 00:21:36.482 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme9n1 : 1.03 310.69 19.42 0.00 0.00 171782.83 14745.60 131945.81 00:21:36.482 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:36.482 Verification LBA range: start 0x0 length 0x400 00:21:36.482 Nvme10n1 : 1.03 248.09 15.51 0.00 0.00 210074.24 9775.79 253405.87 00:21:36.482 =================================================================================================================== 00:21:36.482 Total : 2981.41 186.34 0.00 0.00 192748.33 4423.68 253405.87 00:21:36.743 15:03:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:38.125 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1886923 00:21:38.125 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:38.125 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:38.125 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:38.125 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:38.126 rmmod nvme_rdma 00:21:38.126 rmmod nvme_fabrics 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1886923 ']' 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1886923 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1886923 ']' 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1886923 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.126 15:03:53 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1886923 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1886923' 00:21:38.126 killing process with pid 1886923 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1886923 00:21:38.126 15:03:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1886923 00:21:38.126 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:38.126 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:38.126 00:21:38.126 real 0m5.564s 00:21:38.126 user 0m22.610s 00:21:38.126 sys 0m1.007s 00:21:38.126 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:38.126 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.126 ************************************ 00:21:38.126 END TEST nvmf_shutdown_tc2 00:21:38.126 ************************************ 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:38.389 ************************************ 00:21:38.389 START TEST nvmf_shutdown_tc3 00:21:38.389 ************************************ 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:38.389 15:03:54 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:38.389 15:03:54 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:38.389 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:38.389 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:38.390 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.390 15:03:54 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:38.390 Found net devices under 0000:98:00.0: mlx_0_0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:38.390 Found net devices under 0000:98:00.1: mlx_0_1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:38.390 15:03:54 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:38.390 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:38.390 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:38.390 altname enp152s0f0np0 00:21:38.390 altname ens817f0np0 00:21:38.390 inet 192.168.100.8/24 scope global mlx_0_0 00:21:38.390 valid_lft forever preferred_lft forever 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 
addr show mlx_0_1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:38.390 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:38.390 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:38.390 altname enp152s0f1np1 00:21:38.390 altname ens817f1np1 00:21:38.390 inet 192.168.100.9/24 scope global mlx_0_1 00:21:38.390 valid_lft forever preferred_lft forever 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@105 -- # continue 2 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:38.390 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:38.391 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:38.391 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:38.391 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:38.651 192.168.100.9' 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:38.651 192.168.100.9' 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:38.651 192.168.100.9' 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1888260 00:21:38.651 15:03:54 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1888260 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1888260 ']' 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.651 15:03:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.651 [2024-07-15 15:03:54.555827] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:38.651 [2024-07-15 15:03:54.555893] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.651 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.651 [2024-07-15 15:03:54.642244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.651 [2024-07-15 15:03:54.703056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.651 [2024-07-15 15:03:54.703090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.651 [2024-07-15 15:03:54.703095] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.651 [2024-07-15 15:03:54.703100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.651 [2024-07-15 15:03:54.703104] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
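The target addresses used throughout tc3 come from the get_ip_address pipeline echoed above. Consolidated, it amounts to the following (a sketch of nvmf/common.sh@112-113 as traced, not the verbatim source):

get_ip_address() {
    local interface=$1
    # Take the IPv4 address column from 'ip -o' and strip the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# As traced above, the two values end up in RDMA_IP_LIST and then:
NVMF_FIRST_TARGET_IP=192.168.100.8    # from mlx_0_0
NVMF_SECOND_TARGET_IP=192.168.100.9   # from mlx_0_1

The target itself is launched with core mask 0x1E, which is why the reactor notices below report cores 1 through 4, and the script waits for the RPC socket /var/tmp/spdk.sock before issuing any commands.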
00:21:38.651 [2024-07-15 15:03:54.703216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.651 [2024-07-15 15:03:54.703374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.651 [2024-07-15 15:03:54.703606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.651 [2024-07-15 15:03:54.703606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.592 [2024-07-15 15:03:55.419930] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c886b0/0x1c8cba0) succeed. 00:21:39.592 [2024-07-15 15:03:55.430891] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c89cf0/0x1cce230) succeed. 
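Before any subsystems are defined, shutdown.sh@20 creates the RDMA transport over the two mlx5 devices reported above. The standalone equivalent of that rpc_cmd call, against the default socket this test uses, is simply:

scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192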
00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.592 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.592 Malloc1 00:21:39.592 [2024-07-15 15:03:55.624927] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:39.592 Malloc2 00:21:39.853 Malloc3 00:21:39.853 Malloc4 
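The per-subsystem cat calls above append RPC batches to rpcs.txt, which shutdown.sh@35 then replays through rpc_cmd; the batch text itself is not echoed in this log. Judging only by the resulting Malloc1-Malloc10 bdevs and the single RDMA listener on 192.168.100.8:4420, each iteration plausibly looks like the sequence below (an illustrative guess: the bdev size, block size, and serial number are assumptions, while the cnode$i/Malloc$i naming and listener address are taken from the log):

i=1
rpc_cmd bdev_malloc_create -b Malloc$i 128 512                     # size/block size assumed
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420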
00:21:39.853 Malloc5 00:21:39.853 Malloc6 00:21:39.853 Malloc7 00:21:39.853 Malloc8 00:21:40.114 Malloc9 00:21:40.114 Malloc10 00:21:40.114 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.114 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:40.114 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:40.114 15:03:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1888565 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1888565 /var/tmp/bdevperf.sock 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1888565 ']' 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.114 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.114 { 00:21:40.114 "params": { 00:21:40.114 "name": "Nvme$subsystem", 00:21:40.114 "trtype": "$TEST_TRANSPORT", 00:21:40.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.114 "adrfam": "ipv4", 00:21:40.114 "trsvcid": "$NVMF_PORT", 00:21:40.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.115 { 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme$subsystem", 
00:21:40.115 "trtype": "$TEST_TRANSPORT", 00:21:40.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "$NVMF_PORT", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.115 { 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme$subsystem", 00:21:40.115 "trtype": "$TEST_TRANSPORT", 00:21:40.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "$NVMF_PORT", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.115 { 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme$subsystem", 00:21:40.115 "trtype": "$TEST_TRANSPORT", 00:21:40.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "$NVMF_PORT", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.115 { 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme$subsystem", 00:21:40.115 "trtype": "$TEST_TRANSPORT", 00:21:40.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "$NVMF_PORT", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.115 { 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme$subsystem", 00:21:40.115 
"trtype": "$TEST_TRANSPORT", 00:21:40.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "$NVMF_PORT", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 [2024-07-15 15:03:56.073931] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:40.115 [2024-07-15 15:03:56.073984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888565 ] 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.115 { 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme$subsystem", 00:21:40.115 "trtype": "$TEST_TRANSPORT", 00:21:40.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "$NVMF_PORT", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.115 { 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme$subsystem", 00:21:40.115 "trtype": "$TEST_TRANSPORT", 00:21:40.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "$NVMF_PORT", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.115 { 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme$subsystem", 00:21:40.115 "trtype": "$TEST_TRANSPORT", 00:21:40.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "$NVMF_PORT", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 
15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.115 { 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme$subsystem", 00:21:40.115 "trtype": "$TEST_TRANSPORT", 00:21:40.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "$NVMF_PORT", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.115 "hdgst": ${hdgst:-false}, 00:21:40.115 "ddgst": ${ddgst:-false} 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 } 00:21:40.115 EOF 00:21:40.115 )") 00:21:40.115 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:40.115 15:03:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme1", 00:21:40.115 "trtype": "rdma", 00:21:40.115 "traddr": "192.168.100.8", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "4420", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.115 "hdgst": false, 00:21:40.115 "ddgst": false 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 },{ 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme2", 00:21:40.115 "trtype": "rdma", 00:21:40.115 "traddr": "192.168.100.8", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "4420", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:40.115 "hdgst": false, 00:21:40.115 "ddgst": false 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 },{ 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme3", 00:21:40.115 "trtype": "rdma", 00:21:40.115 "traddr": "192.168.100.8", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "4420", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:40.115 "hdgst": false, 00:21:40.115 "ddgst": false 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 },{ 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme4", 00:21:40.115 "trtype": "rdma", 00:21:40.115 "traddr": "192.168.100.8", 00:21:40.115 "adrfam": "ipv4", 00:21:40.115 "trsvcid": "4420", 00:21:40.115 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:40.115 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:40.115 "hdgst": false, 00:21:40.115 "ddgst": false 00:21:40.115 }, 00:21:40.115 "method": "bdev_nvme_attach_controller" 00:21:40.115 },{ 00:21:40.115 "params": { 00:21:40.115 "name": "Nvme5", 00:21:40.116 "trtype": "rdma", 00:21:40.116 "traddr": "192.168.100.8", 00:21:40.116 "adrfam": "ipv4", 00:21:40.116 "trsvcid": "4420", 00:21:40.116 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:40.116 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:40.116 "hdgst": false, 00:21:40.116 "ddgst": false 00:21:40.116 }, 00:21:40.116 "method": "bdev_nvme_attach_controller" 00:21:40.116 },{ 00:21:40.116 "params": { 
00:21:40.116 "name": "Nvme6", 00:21:40.116 "trtype": "rdma", 00:21:40.116 "traddr": "192.168.100.8", 00:21:40.116 "adrfam": "ipv4", 00:21:40.116 "trsvcid": "4420", 00:21:40.116 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:40.116 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:40.116 "hdgst": false, 00:21:40.116 "ddgst": false 00:21:40.116 }, 00:21:40.116 "method": "bdev_nvme_attach_controller" 00:21:40.116 },{ 00:21:40.116 "params": { 00:21:40.116 "name": "Nvme7", 00:21:40.116 "trtype": "rdma", 00:21:40.116 "traddr": "192.168.100.8", 00:21:40.116 "adrfam": "ipv4", 00:21:40.116 "trsvcid": "4420", 00:21:40.116 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:40.116 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:40.116 "hdgst": false, 00:21:40.116 "ddgst": false 00:21:40.116 }, 00:21:40.116 "method": "bdev_nvme_attach_controller" 00:21:40.116 },{ 00:21:40.116 "params": { 00:21:40.116 "name": "Nvme8", 00:21:40.116 "trtype": "rdma", 00:21:40.116 "traddr": "192.168.100.8", 00:21:40.116 "adrfam": "ipv4", 00:21:40.116 "trsvcid": "4420", 00:21:40.116 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:40.116 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:40.116 "hdgst": false, 00:21:40.116 "ddgst": false 00:21:40.116 }, 00:21:40.116 "method": "bdev_nvme_attach_controller" 00:21:40.116 },{ 00:21:40.116 "params": { 00:21:40.116 "name": "Nvme9", 00:21:40.116 "trtype": "rdma", 00:21:40.116 "traddr": "192.168.100.8", 00:21:40.116 "adrfam": "ipv4", 00:21:40.116 "trsvcid": "4420", 00:21:40.116 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:40.116 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:40.116 "hdgst": false, 00:21:40.116 "ddgst": false 00:21:40.116 }, 00:21:40.116 "method": "bdev_nvme_attach_controller" 00:21:40.116 },{ 00:21:40.116 "params": { 00:21:40.116 "name": "Nvme10", 00:21:40.116 "trtype": "rdma", 00:21:40.116 "traddr": "192.168.100.8", 00:21:40.116 "adrfam": "ipv4", 00:21:40.116 "trsvcid": "4420", 00:21:40.116 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:40.116 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:40.116 "hdgst": false, 00:21:40.116 "ddgst": false 00:21:40.116 }, 00:21:40.116 "method": "bdev_nvme_attach_controller" 00:21:40.116 }' 00:21:40.116 [2024-07-15 15:03:56.140816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.376 [2024-07-15 15:03:56.205901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.317 Running I/O for 10 seconds... 
00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.317 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.578 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.578 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=4 00:21:41.578 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 4 -ge 100 ']' 00:21:41.578 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.839 15:03:57 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=132 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 132 -ge 100 ']' 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1888260 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1888260 ']' 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1888260 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1888260 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1888260' 00:21:41.839 killing process with pid 1888260 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1888260 00:21:41.839 15:03:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1888260 00:21:42.411 15:03:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:42.411 15:03:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:42.995 [2024-07-15 15:03:58.908375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.908421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:21:42.995 [2024-07-15 15:03:58.908432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.908440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:21:42.995 [2024-07-15 15:03:58.908448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.908455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:21:42.995 [2024-07-15 15:03:58.908463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.908476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:21:42.995 [2024-07-15 15:03:58.911067] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.995 [2024-07-15 15:03:58.911082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:42.995 [2024-07-15 15:03:58.911139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.911148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.995 [2024-07-15 15:03:58.911157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.911164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.995 [2024-07-15 15:03:58.911172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.911180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.995 [2024-07-15 15:03:58.911188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.911195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.995 [2024-07-15 15:03:58.913916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.995 [2024-07-15 15:03:58.913949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
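[Editor's note] The polling traced just before the target is killed (shutdown.sh@50-@69: read_io_count=4, a 0.25 s sleep, then read_io_count=132 and break) can be reconstructed as the helper below. rpc_cmd is the test suite's wrapper around scripts/rpc.py; the 100-read threshold and the 10 x 0.25 s retry budget are taken directly from the xtrace, so treat this as a sketch rather than the verbatim target/shutdown.sh source.

waitforio() {
    # Usage: waitforio /var/tmp/bdevperf.sock Nvme1n1
    local sock=$1 bdev=$2
    [[ -z $sock || -z $bdev ]] && return 1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # Ask bdevperf (over its RPC socket) how many reads the bdev has completed.
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}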
00:21:42.995 [2024-07-15 15:03:58.913989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.914012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.914036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.914057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.914080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.914100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.914123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.914143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.916443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.995 [2024-07-15 15:03:58.916453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:42.995 [2024-07-15 15:03:58.916466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.916474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.916482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.916492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.916500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.916507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.916514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.916521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.919262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.995 [2024-07-15 15:03:58.919294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
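[Editor's note] The killprocess 1888260 call traced above (autotest_common.sh@948-@972: the kill -0 liveness check, uname, `ps --no-headers -o comm=`, then kill and wait against the reactor_1 process) follows the shape sketched below. The real helper's handling of a sudo-wrapped process is more involved; it is reduced to a simple guard here, so this is an approximation of the traced logic, not its source.

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                     # bail out if the pid is already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1     # simplified: never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}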
00:21:42.995 [2024-07-15 15:03:58.919332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.919354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.919378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.919399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.919422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.919442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.919465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.919486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.921968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.995 [2024-07-15 15:03:58.921999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:42.995 [2024-07-15 15:03:58.922038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.922059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.922083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.922104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.922127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.922147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.995 [2024-07-15 15:03:58.922170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.995 [2024-07-15 15:03:58.922190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:0 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.996 [2024-07-15 15:03:58.925034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.996 [2024-07-15 15:03:58.925072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:42.996 [2024-07-15 15:03:58.928068] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019272880 was disconnected and freed. reset controller. 00:21:42.996 [2024-07-15 15:03:58.928104] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.930531] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019272600 was disconnected and freed. reset controller. 00:21:42.996 [2024-07-15 15:03:58.930543] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.933368] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019272380 was disconnected and freed. reset controller. 00:21:42.996 [2024-07-15 15:03:58.933400] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.936318] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019272100 was disconnected and freed. reset controller. 00:21:42.996 [2024-07-15 15:03:58.936350] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.936574] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.936699] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.936732] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.936763] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.936794] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.936823] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:42.996 [2024-07-15 15:03:58.937012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.996 [2024-07-15 15:03:58.937046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:42.996 [2024-07-15 15:03:58.937072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:42.996 [2024-07-15 15:03:58.937097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:42.996 [2024-07-15 15:03:58.945433] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:42.996 [2024-07-15 15:03:58.945452] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:42.996 [2024-07-15 15:03:58.945458] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:42.996 [2024-07-15 15:03:58.945711] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:42.996 [2024-07-15 15:03:58.945720] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:42.996 [2024-07-15 15:03:58.945726] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:21:42.996 [2024-07-15 15:03:58.945974] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:42.996 [2024-07-15 15:03:58.945983] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:42.996 [2024-07-15 15:03:58.945989] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:21:42.996 [2024-07-15 15:03:58.946218] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:42.996 [2024-07-15 15:03:58.946234] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:42.996 [2024-07-15 15:03:58.946240] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d1100 00:21:42.996 [2024-07-15 15:03:58.946508] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.946562] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.956559] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.956613] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.966611] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.966664] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.976664] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:42.996 [2024-07-15 15:03:58.976717] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.996 [2024-07-15 15:03:58.984439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105db000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105ba000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010515000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104f4000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d3000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b2000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010491000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010470000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6f7000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6d6000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6b5000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b694000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b673000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b610000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.984991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d49d000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.984999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 
sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.985011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d47c000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.985018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.996 [2024-07-15 15:03:58.985031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d45b000 len:0x10000 key:0x184300 00:21:42.996 [2024-07-15 15:03:58.985038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d43a000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d419000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3f8000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3d7000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3b6000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d395000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d374000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 
[2024-07-15 15:03:58.985198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d353000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d332000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d311000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2f0000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ef000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ce000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ad000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d68c000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d66b000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d64a000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d629000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d608000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b907000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8e6000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8c5000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a4000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b883000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b862000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:15104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b841000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b820000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8ff000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8de000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8bd000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d89c000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d87b000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d85a000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d839000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.985755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16256 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000d818000 len:0x10000 key:0x184300 00:21:42.997 [2024-07-15 15:03:58.985763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.988624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:42.997 [2024-07-15 15:03:58.988664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.997 [2024-07-15 15:03:58.988673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:23401 cdw0:fd265c70 sqhd:54db p:1 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.988681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.997 [2024-07-15 15:03:58.988688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:23401 cdw0:fd265c70 sqhd:54db p:1 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.988696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.997 [2024-07-15 15:03:58.988703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:23401 cdw0:fd265c70 sqhd:54db p:1 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.988711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.997 [2024-07-15 15:03:58.988718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:23401 cdw0:fd265c70 sqhd:54db p:1 m:0 dnr:0 00:21:42.997 [2024-07-15 15:03:58.991445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.997 [2024-07-15 15:03:58.991456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:21:42.997 [2024-07-15 15:03:58.991471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.997 [2024-07-15 15:03:58.991479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.997 [2024-07-15 15:03:58.991487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.997 [2024-07-15 15:03:58.991495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.997 [2024-07-15 15:03:58.991503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.997 [2024-07-15 15:03:58.991510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.997 [2024-07-15 15:03:58.991518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.997 [2024-07-15 15:03:58.991525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.997 [2024-07-15 15:03:58.993450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.997 [2024-07-15 15:03:58.993461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:42.997 [2024-07-15 15:03:58.993474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.997 [2024-07-15 15:03:58.993482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.998 [2024-07-15 15:03:58.993490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.998 [2024-07-15 15:03:58.993497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.998 [2024-07-15 15:03:58.993507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.998 [2024-07-15 15:03:58.993514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.998 [2024-07-15 15:03:58.993522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.998 [2024-07-15 15:03:58.993529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.998 [2024-07-15 15:03:58.995956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.998 [2024-07-15 15:03:58.995967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
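[Editor's note] Not part of this trace, but when reading the cascade of failed-state controllers above (cnode6 through cnode10) it can help to ask bdevperf which controllers bdev_nvme still tracks. bdev_nvme_get_controllers is a standard SPDK RPC; using it here is an editor's suggestion, not something shutdown.sh does in this run.

rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
# Should still list Nvme1..Nvme10 while bdev_nvme keeps retrying the resets logged above.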
00:21:42.998 [2024-07-15 15:03:58.995982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.998 [2024-07-15 15:03:58.995989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.998 [2024-07-15 15:03:58.995997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.998 [2024-07-15 15:03:58.996005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.998 [2024-07-15 15:03:58.996012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.998 [2024-07-15 15:03:58.996019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.998 [2024-07-15 15:03:58.996026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.998 [2024-07-15 15:03:58.996033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26209 cdw0:fd265c70 sqhd:bf00 p:1 m:1 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:42.998 [2024-07-15 15:03:58.998449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:42.998 [2024-07-15 15:03:58.998461] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
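[Editor's note] Pulling the pieces of this nvmf_shutdown_tc3 run together, the sequence traced in this section reduces to the outline below. The bdevperf invocation is abbreviated (bdevperf_bin and bdevperf_opts stand in for the binary path and I/O options set earlier in shutdown.sh, which are not shown in this excerpt), gen_attach_params refers to the hypothetical sketch earlier in this section, and the trap line is copied from the trace.

trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
"$bdevperf_bin" -r /var/tmp/bdevperf.sock "${bdevperf_opts[@]}" &   # fed the JSON produced by the config generator
perfpid=$!
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init               # wait for bdevperf's framework to come up
waitforio /var/tmp/bdevperf.sock Nvme1n1                            # block until Nvme1n1 has served >=100 reads
killprocess "$nvmfpid"                                              # take the nvmf target away mid-I/O
nvmfpid=
sleep 1    # bdev_nvme then logs the CQ transport errors and resets cnode1..cnode10 seen throughout this section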
00:21:42.998 [2024-07-15 15:03:58.998537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019aafc00 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a9fb80 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a8fb00 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a7fa80 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a5f980 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a4f900 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a3f880 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a2f800 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 
15:03:58.998719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f700 len:0x10000 key:0x182e00 00:21:42.998 [2024-07-15 15:03:58.998746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019df0000 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfe80 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d9fd80 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d6fc00 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d4fb00 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3fa80 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.998991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182f00 00:21:42.998 [2024-07-15 15:03:58.998999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.999011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010fa7000 len:0x10000 key:0x184300 00:21:42.998 [2024-07-15 15:03:58.999018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.999031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f86000 len:0x10000 key:0x184300 00:21:42.998 [2024-07-15 15:03:58.999039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.999051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f65000 len:0x10000 key:0x184300 00:21:42.998 [2024-07-15 15:03:58.999060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.999072] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f44000 len:0x10000 key:0x184300 00:21:42.998 [2024-07-15 15:03:58.999080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.999092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f23000 len:0x10000 key:0x184300 00:21:42.998 [2024-07-15 15:03:58.999100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.999112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f02000 len:0x10000 key:0x184300 00:21:42.998 [2024-07-15 15:03:58.999119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.999132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ee1000 len:0x10000 key:0x184300 00:21:42.998 [2024-07-15 15:03:58.999139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.999151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ec0000 len:0x10000 key:0x184300 00:21:42.998 [2024-07-15 15:03:58.999158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.998 [2024-07-15 15:03:58.999170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c147000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c126000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c105000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0e4000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0c3000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0a2000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c081000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c060000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7f7000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7d6000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d7b5000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d794000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d773000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000d752000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d731000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d710000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db0f000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daee000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dacd000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daac000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da8b000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da6a000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da49000 len:0x10000 
key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da28000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd1f000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcfe000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcdd000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcbc000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc9b000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc7a000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc59000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 15:03:58.999800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:58.999813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc38000 len:0x10000 key:0x184300 00:21:42.999 [2024-07-15 
15:03:58.999821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0cfd00 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0bfc80 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a08fb00 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a07fa80 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a06fa00 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:42.999 [2024-07-15 15:03:59.002915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183100 00:21:42.999 [2024-07-15 15:03:59.002923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.002935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a03f880 len:0x10000 key:0x183100 00:21:43.000 [2024-07-15 15:03:59.002943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.002954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183100 00:21:43.000 [2024-07-15 15:03:59.002962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.002973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183100 00:21:43.000 [2024-07-15 15:03:59.002981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.002992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183100 00:21:43.000 [2024-07-15 15:03:59.003000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e9f580 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e7f480 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e4f300 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f200 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x183000 00:21:43.000 [2024-07-15 15:03:59.003193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000111b7000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011196000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011175000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011154000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011133000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011112000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110f1000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110d0000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c357000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c336000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f4000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 
00:21:43.000 [2024-07-15 15:03:59.003451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d3000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b2000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c291000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c270000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010725000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010746000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb76000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb55000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb34000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003633] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb13000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000caf2000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cad1000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cab0000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c8f000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c6e000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c4d000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c2c000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c0b000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bea000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.000 [2024-07-15 15:03:59.003837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bc9000 len:0x10000 key:0x184300 00:21:43.000 [2024-07-15 15:03:59.003844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.003856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b66000 len:0x10000 key:0x184300 00:21:43.001 [2024-07-15 15:03:59.003864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.003876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b45000 len:0x10000 key:0x184300 00:21:43.001 [2024-07-15 15:03:59.003883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.003896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b24000 len:0x10000 key:0x184300 00:21:43.001 [2024-07-15 15:03:59.003903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.003915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b03000 len:0x10000 key:0x184300 00:21:43.001 [2024-07-15 15:03:59.003923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.003935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ae2000 len:0x10000 key:0x184300 00:21:43.001 [2024-07-15 15:03:59.003942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.003955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ac1000 len:0x10000 key:0x184300 00:21:43.001 [2024-07-15 15:03:59.003962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.003974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010aa0000 len:0x10000 key:0x184300 00:21:43.001 [2024-07-15 15:03:59.003982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.003995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e9f000 len:0x10000 key:0x184300 00:21:43.001 [2024-07-15 15:03:59.004003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007212] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ec00 was disconnected and freed. reset controller. 00:21:43.001 [2024-07-15 15:03:59.007224] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.001 [2024-07-15 15:03:59.007239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7f0000 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7dff80 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a79fd80 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 
key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74fb00 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a71f980 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 
15:03:59.007560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66f400 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65f380 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x184200 00:21:43.001 [2024-07-15 15:03:59.007829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183900 00:21:43.001 [2024-07-15 15:03:59.007848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183900 00:21:43.001 [2024-07-15 15:03:59.007869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183900 00:21:43.001 [2024-07-15 15:03:59.007888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183900 00:21:43.001 [2024-07-15 15:03:59.007907] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183900 00:21:43.001 [2024-07-15 15:03:59.007926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.001 [2024-07-15 15:03:59.007938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.007945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.007957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.007964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.007976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.007983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.007996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 
sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183900 00:21:43.002 [2024-07-15 15:03:59.008357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a46fa00 len:0x10000 key:0x183400 00:21:43.002 [2024-07-15 15:03:59.008377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184300 00:21:43.002 [2024-07-15 15:03:59.008396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184300 00:21:43.002 [2024-07-15 15:03:59.008416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f939000 len:0x10000 key:0x184300 00:21:43.002 [2024-07-15 15:03:59.008436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x184300 00:21:43.002 [2024-07-15 15:03:59.008455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 
[2024-07-15 15:03:59.008468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f95a000 len:0x10000 key:0x184300 00:21:43.002 [2024-07-15 15:03:59.008475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.008488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f97b000 len:0x10000 key:0x184300 00:21:43.002 [2024-07-15 15:03:59.008495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011638] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e980 was disconnected and freed. reset controller. 00:21:43.002 [2024-07-15 15:03:59.011650] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.002 [2024-07-15 15:03:59.011661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aacfd00 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aabfc80 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa8fb00 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa7fa80 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.002 [2024-07-15 15:03:59.011860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x183e00 00:21:43.002 [2024-07-15 15:03:59.011867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.011879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183e00 00:21:43.003 [2024-07-15 15:03:59.011886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.011898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183e00 00:21:43.003 [2024-07-15 15:03:59.011907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.011919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183e00 00:21:43.003 [2024-07-15 15:03:59.011926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.011938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183900 00:21:43.003 [2024-07-15 15:03:59.011945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.011957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183900 00:21:43.003 [2024-07-15 15:03:59.011964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.011976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82f200 len:0x10000 key:0x183900 00:21:43.003 [2024-07-15 15:03:59.011984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.011995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183900 00:21:43.003 [2024-07-15 15:03:59.012003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183900 00:21:43.003 [2024-07-15 15:03:59.012022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 
len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183800 
00:21:43.003 [2024-07-15 15:03:59.012317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 
15:03:59.012490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183800 00:21:43.003 [2024-07-15 15:03:59.012628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183d00 00:21:43.003 [2024-07-15 15:03:59.012647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183d00 00:21:43.003 [2024-07-15 15:03:59.012666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.003 [2024-07-15 15:03:59.012678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183d00 00:21:43.003 [2024-07-15 15:03:59.012685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.012697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.012704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.012716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.012723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.012735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.012742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.012754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.012761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.012773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.012782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.012793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.012801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.012812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.012820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.012831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.012838] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.012850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.018213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.018266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.018276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.018289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183e00 00:21:43.004 [2024-07-15 15:03:59.018296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.021996] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e700 was disconnected and freed. reset controller. 00:21:43.004 [2024-07-15 15:03:59.022064] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.004 [2024-07-15 15:03:59.022112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183d00 00:21:43.004 [2024-07-15 15:03:59.022139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 
[2024-07-15 15:03:59.022526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022724] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.022985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.022997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.023004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.023016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.023023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.023035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183300 00:21:43.004 [2024-07-15 15:03:59.023042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.004 [2024-07-15 15:03:59.023054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183300 00:21:43.005 [2024-07-15 15:03:59.023062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023074] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fc0000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fe1000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013002000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013023000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013044000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013065000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013086000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000130a7000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010050000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010071000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010092000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100b3000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100d4000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100f5000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010116000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010137000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114cf000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114ae000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001148d000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001146c000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001144b000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001142a000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011409000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000113e8000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000113c7000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000113a6000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011385000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011364000 len:0x10000 
key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8b5000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f894000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f873000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.023704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f852000 len:0x10000 key:0x184300 00:21:43.005 [2024-07-15 15:03:59.023711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:efed9000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.005 [2024-07-15 15:03:59.045493] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e480 was disconnected and freed. reset controller. 00:21:43.005 [2024-07-15 15:03:59.045514] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.005 [2024-07-15 15:03:59.045564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:43.005 [2024-07-15 15:03:59.045629] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.005 [2024-07-15 15:03:59.045645] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.005 [2024-07-15 15:03:59.045656] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.005 [2024-07-15 15:03:59.045666] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
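The wall of ABORTED - SQ DELETION completions above is the expected signature of this test case: bdevperf still has a queue depth of 64 outstanding against every subsystem when the queues are torn down, so each in-flight READ completes with the generic status 00/08 (Command Aborted due to SQ Deletion) and bdev_nvme then tries to reset or fail over each controller. A minimal sketch of the two pieces visible in this excerpt, the bdevperf invocation and the final kill; the surrounding control flow of shutdown_tc3 is an assumption, not a verbatim copy of the script:

    # sketch only: launch the verify workload against the generated subsystems,
    # then reap it after the target side has been pulled out from underneath it
    $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    # ... target is brought down here, producing the aborts logged above ...
    kill -9 $perfpid || true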
00:21:43.266 task offset: 8192 on job bdev=Nvme10n1 fails
00:21:43.266
00:21:43.266 Latency(us)
00:21:43.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:43.266 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.266 Job: Nvme1n1 ended in about 1.95 seconds with error
00:21:43.266 Verification LBA range: start 0x0 length 0x400
00:21:43.266 Nvme1n1 : 1.95 110.91 6.93 32.86 0.00 441790.90 4560.21 1048576.00
00:21:43.266 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.266 Job: Nvme2n1 ended in about 1.95 seconds with error
00:21:43.266 Verification LBA range: start 0x0 length 0x400
00:21:43.266 Nvme2n1 : 1.95 106.74 6.67 32.84 0.00 450624.25 19879.25 1048576.00
00:21:43.267 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.267 Job: Nvme3n1 ended in about 1.95 seconds with error
00:21:43.267 Verification LBA range: start 0x0 length 0x400
00:21:43.267 Nvme3n1 : 1.95 114.88 7.18 32.82 0.00 421609.62 25122.13 1048576.00
00:21:43.267 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.267 Job: Nvme4n1 ended in about 1.95 seconds with error
00:21:43.267 Verification LBA range: start 0x0 length 0x400
00:21:43.267 Nvme4n1 : 1.95 126.60 7.91 32.80 0.00 386883.09 4068.69 1055566.51
00:21:43.267 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.267 Job: Nvme5n1 ended in about 1.90 seconds with error
00:21:43.267 Verification LBA range: start 0x0 length 0x400
00:21:43.267 Nvme5n1 : 1.90 113.59 7.10 33.66 0.00 414049.43 39976.96 1083528.53
00:21:43.267 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.267 Job: Nvme6n1 ended in about 1.91 seconds with error
00:21:43.267 Verification LBA range: start 0x0 length 0x400
00:21:43.267 Nvme6n1 : 1.91 113.34 7.08 33.58 0.00 410097.76 35389.44 1076538.03
00:21:43.267 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.267 Job: Nvme7n1 ended in about 1.91 seconds with error
00:21:43.267 Verification LBA range: start 0x0 length 0x400
00:21:43.267 Nvme7n1 : 1.91 134.02 8.38 33.50 0.00 356163.97 5761.71 1076538.03
00:21:43.267 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.267 Job: Nvme8n1 ended in about 1.92 seconds with error
00:21:43.267 Verification LBA range: start 0x0 length 0x400
00:21:43.267 Nvme8n1 : 1.92 133.33 8.33 33.33 0.00 353239.04 58545.49 1062557.01
00:21:43.267 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.267 Job: Nvme9n1 ended in about 1.93 seconds with error
00:21:43.267 Verification LBA range: start 0x0 length 0x400
00:21:43.267 Nvme9n1 : 1.93 116.34 7.27 33.24 0.00 390255.88 49152.00 1055566.51
00:21:43.267 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:43.267 Job: Nvme10n1 ended in about 1.89 seconds with error
00:21:43.267 Verification LBA range: start 0x0 length 0x400
00:21:43.267 Nvme10n1 : 1.89 33.91 2.12 33.91 0.00 850430.29 86070.61 1083528.53
00:21:43.267 ===================================================================================================================
00:21:43.267 Total : 1103.64 68.98 332.55 0.00 421768.85 4068.69 1083528.53
00:21:43.267 [2024-07-15 15:03:59.073178] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:43.267 [2024-07-15 15:03:59.073200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting 
controller 00:21:43.267 [2024-07-15 15:03:59.073211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:43.267 [2024-07-15 15:03:59.074619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:43.267 [2024-07-15 15:03:59.074634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:43.267 [2024-07-15 15:03:59.079330] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:43.267 [2024-07-15 15:03:59.079348] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:43.267 [2024-07-15 15:03:59.079354] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001927f440 00:21:43.267 [2024-07-15 15:03:59.087870] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:43.267 [2024-07-15 15:03:59.087889] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:43.267 [2024-07-15 15:03:59.087895] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5080 00:21:43.267 [2024-07-15 15:03:59.088159] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:43.267 [2024-07-15 15:03:59.088168] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:43.267 [2024-07-15 15:03:59.088174] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a5180 00:21:43.267 [2024-07-15 15:03:59.088409] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:43.267 [2024-07-15 15:03:59.088418] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:43.267 [2024-07-15 15:03:59.088424] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192b1000 00:21:43.267 [2024-07-15 15:03:59.089635] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:43.267 [2024-07-15 15:03:59.089668] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:43.267 [2024-07-15 15:03:59.089686] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928b140 00:21:43.267 [2024-07-15 15:03:59.089963] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:43.267 [2024-07-15 15:03:59.089986] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:43.267 [2024-07-15 15:03:59.090002] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019298800 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1888565 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:43.267 15:03:59 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:43.267 rmmod nvme_rdma 00:21:43.267 rmmod nvme_fabrics 00:21:43.267 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 1888565 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:43.267 00:21:43.267 real 0m5.052s 00:21:43.267 user 0m17.096s 00:21:43.267 sys 0m1.009s 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.267 15:03:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.267 ************************************ 00:21:43.267 END TEST nvmf_shutdown_tc3 00:21:43.267 ************************************ 00:21:43.529 15:03:59 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:43.529 15:03:59 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:43.529 00:21:43.529 real 0m25.813s 00:21:43.529 user 1m10.905s 00:21:43.529 sys 0m9.166s 00:21:43.529 15:03:59 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.529 15:03:59 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:43.529 ************************************ 00:21:43.529 END TEST nvmf_shutdown 00:21:43.529 ************************************ 00:21:43.529 15:03:59 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:43.529 15:03:59 nvmf_rdma -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:43.529 15:03:59 nvmf_rdma -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.529 15:03:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:43.529 15:03:59 nvmf_rdma -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:43.529 15:03:59 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.529 15:03:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:43.529 15:03:59 nvmf_rdma -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:43.529 15:03:59 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:21:43.529 15:03:59 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:43.529 15:03:59 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.529 15:03:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:43.529 ************************************ 00:21:43.529 START TEST nvmf_multicontroller 00:21:43.529 ************************************ 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:21:43.529 * Looking for test storage... 00:21:43.529 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.529 15:03:59 
nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.529 15:03:59 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:21:43.791 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:21:43.791 00:21:43.791 real 0m0.124s 00:21:43.791 user 0m0.059s 00:21:43.791 sys 0m0.072s 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.791 15:03:59 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.791 ************************************ 00:21:43.791 END TEST nvmf_multicontroller 00:21:43.791 ************************************ 00:21:43.791 15:03:59 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:43.791 15:03:59 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:43.791 15:03:59 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:43.791 15:03:59 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.791 15:03:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:43.791 ************************************ 00:21:43.791 START TEST nvmf_aer 00:21:43.791 ************************************ 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:43.791 * Looking for test storage... 
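The nvmf_multicontroller run above finishes in well under a second because multicontroller.sh bails out before doing any work on RDMA: the test needs host and target to share an IP, which the rdma stack cannot provide, so the script prints the skip message and exits 0. Reconstructed from the xtrace above; the variable name in the condition is an assumption, since the trace only shows the already-expanded comparison:

    # host/multicontroller.sh, early-out guard (lines 18-20 in the trace)
    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0
    fi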
00:21:43.791 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.791 15:03:59 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:51.932 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:51.932 15:04:07 
nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:51.932 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:51.932 Found net devices under 0000:98:00.0: mlx_0_0 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:51.932 Found net devices under 0000:98:00.1: mlx_0_1 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:51.932 
15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:51.932 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:51.933 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:51.933 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:51.933 altname enp152s0f0np0 00:21:51.933 altname ens817f0np0 00:21:51.933 inet 192.168.100.8/24 scope global mlx_0_0 00:21:51.933 valid_lft forever preferred_lft forever 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 
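Before any addresses are read, rdma_device_init loads the kernel RDMA stack one module at a time in exactly the order traced above (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm) and only then walks the RDMA-capable netdevs. A condensed equivalent of the module-load step; the loop form is an assumption on top of the individual modprobe calls shown in the trace:

    load_ib_rdma_modules() {
        local mod
        # same order as the trace above; nvme-rdma itself is loaded later
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe $mod
        done
    }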
00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:51.933 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:51.933 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:51.933 altname enp152s0f1np1 00:21:51.933 altname ens817f1np1 00:21:51.933 inet 192.168.100.9/24 scope global mlx_0_1 00:21:51.933 valid_lft forever preferred_lft forever 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr 
show mlx_0_0 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:51.933 192.168.100.9' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:51.933 192.168.100.9' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:51.933 192.168.100.9' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1893437 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1893437 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1893437 ']' 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
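The addresses used by the rest of the host tests, 192.168.100.8 and 192.168.100.9, are not hard-coded; they are read back off mlx_0_0 and mlx_0_1 with the ip/awk/cut pipeline traced above and then split into first and second target IPs. A condensed version of that discovery, using the helper and variable names that appear in the trace:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show $interface | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for nic in $(get_rdma_if_list); do get_ip_address $nic; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run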
00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.933 15:04:07 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.933 [2024-07-15 15:04:07.849635] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:51.933 [2024-07-15 15:04:07.849692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.933 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.933 [2024-07-15 15:04:07.918781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.933 [2024-07-15 15:04:07.986896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.933 [2024-07-15 15:04:07.986935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.933 [2024-07-15 15:04:07.986942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.933 [2024-07-15 15:04:07.986949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.933 [2024-07-15 15:04:07.986954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.933 [2024-07-15 15:04:07.987264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.933 [2024-07-15 15:04:07.987441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.933 [2024-07-15 15:04:07.987441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.933 [2024-07-15 15:04:07.987282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.875 [2024-07-15 15:04:08.709311] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dd9200/0x1ddd6f0) succeed. 00:21:52.875 [2024-07-15 15:04:08.722479] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dda840/0x1e1ed80) succeed. 
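At this point aer.sh has a running target and an RDMA transport: nvmfappstart launches nvmf_tgt, waitforlisten polls until the RPC socket answers, and the first RPC creates the transport, which is what produces the two create_ib_device NOTICE lines for mlx5_0 and mlx5_1 above. Issued by hand the sequence would look roughly like this; rpc.py and the default /var/tmp/spdk.sock socket are assumptions, while the RPC names and arguments are taken from the trace:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten $nvmfpid        # waits for /var/tmp/spdk.sock to accept RPCs
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # the next RPCs in the trace below: bdev_malloc_create 64 512 --name Malloc0,
    # nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns,
    # and nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 4420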
00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.875 Malloc0 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.875 [2024-07-15 15:04:08.894914] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.875 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.875 [ 00:21:52.875 { 00:21:52.875 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:52.875 "subtype": "Discovery", 00:21:52.875 "listen_addresses": [], 00:21:52.875 "allow_any_host": true, 00:21:52.875 "hosts": [] 00:21:52.875 }, 00:21:52.875 { 00:21:52.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.875 "subtype": "NVMe", 00:21:52.875 "listen_addresses": [ 00:21:52.875 { 00:21:52.875 "trtype": "RDMA", 00:21:52.875 "adrfam": "IPv4", 00:21:52.875 "traddr": "192.168.100.8", 00:21:52.875 "trsvcid": "4420" 00:21:52.875 } 00:21:52.875 ], 00:21:52.875 "allow_any_host": true, 00:21:52.875 "hosts": [], 00:21:52.875 "serial_number": "SPDK00000000000001", 00:21:52.875 "model_number": "SPDK bdev Controller", 00:21:52.875 "max_namespaces": 2, 00:21:52.875 "min_cntlid": 1, 00:21:52.875 "max_cntlid": 65519, 00:21:52.875 "namespaces": [ 00:21:52.875 { 00:21:52.875 "nsid": 1, 00:21:52.875 "bdev_name": "Malloc0", 00:21:52.875 "name": "Malloc0", 00:21:52.876 "nguid": "D47FEC07E6B64AF482E5CFAFF1E6346C", 00:21:52.876 "uuid": "d47fec07-e6b6-4af4-82e5-cfaff1e6346c" 00:21:52.876 } 00:21:52.876 ] 00:21:52.876 } 00:21:52.876 ] 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # 
rm -f /tmp/aer_touch_file 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=1893786 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:52.876 15:04:08 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:53.136 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.136 Malloc1 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.136 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.136 [ 00:21:53.136 { 00:21:53.136 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:53.136 "subtype": "Discovery", 00:21:53.136 "listen_addresses": [], 00:21:53.136 "allow_any_host": true, 00:21:53.136 "hosts": [] 00:21:53.136 }, 00:21:53.136 { 00:21:53.136 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.136 "subtype": "NVMe", 00:21:53.136 "listen_addresses": [ 00:21:53.136 { 00:21:53.136 "trtype": "RDMA", 00:21:53.136 "adrfam": "IPv4", 00:21:53.136 "traddr": "192.168.100.8", 00:21:53.136 "trsvcid": "4420" 00:21:53.136 } 00:21:53.136 ], 00:21:53.136 "allow_any_host": true, 00:21:53.136 "hosts": [], 00:21:53.136 "serial_number": "SPDK00000000000001", 00:21:53.137 "model_number": "SPDK bdev Controller", 00:21:53.137 "max_namespaces": 2, 00:21:53.137 
"min_cntlid": 1, 00:21:53.137 "max_cntlid": 65519, 00:21:53.137 "namespaces": [ 00:21:53.137 { 00:21:53.137 "nsid": 1, 00:21:53.137 "bdev_name": "Malloc0", 00:21:53.137 "name": "Malloc0", 00:21:53.137 "nguid": "D47FEC07E6B64AF482E5CFAFF1E6346C", 00:21:53.137 "uuid": "d47fec07-e6b6-4af4-82e5-cfaff1e6346c" 00:21:53.137 }, 00:21:53.137 { 00:21:53.137 "nsid": 2, 00:21:53.137 "bdev_name": "Malloc1", 00:21:53.137 "name": "Malloc1", 00:21:53.137 "nguid": "125992C83B0144D3A9710901D5388C5F", 00:21:53.137 "uuid": "125992c8-3b01-44d3-a971-0901d5388c5f" 00:21:53.137 } 00:21:53.137 ] 00:21:53.137 } 00:21:53.137 ] 00:21:53.137 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.137 15:04:09 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 1893786 00:21:53.398 Asynchronous Event Request test 00:21:53.398 Attaching to 192.168.100.8 00:21:53.398 Attached to 192.168.100.8 00:21:53.398 Registering asynchronous event callbacks... 00:21:53.398 Starting namespace attribute notice tests for all controllers... 00:21:53.398 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:53.398 aer_cb - Changed Namespace 00:21:53.398 Cleaning up... 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:53.398 rmmod nvme_rdma 00:21:53.398 rmmod nvme_fabrics 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1893437 ']' 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # 
killprocess 1893437 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1893437 ']' 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1893437 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1893437 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1893437' 00:21:53.398 killing process with pid 1893437 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1893437 00:21:53.398 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1893437 00:21:53.659 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.659 15:04:09 nvmf_rdma.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:53.659 00:21:53.659 real 0m9.938s 00:21:53.659 user 0m8.844s 00:21:53.659 sys 0m6.272s 00:21:53.659 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:53.659 15:04:09 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.659 ************************************ 00:21:53.659 END TEST nvmf_aer 00:21:53.659 ************************************ 00:21:53.659 15:04:09 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:53.659 15:04:09 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:53.659 15:04:09 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:53.659 15:04:09 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:53.659 15:04:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:53.659 ************************************ 00:21:53.659 START TEST nvmf_async_init 00:21:53.659 ************************************ 00:21:53.659 15:04:09 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:53.921 * Looking for test storage... 
00:21:53.921 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=bedbb9cbce484defb047f34392f00daa 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:53.921 
15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.921 15:04:09 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.129 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.129 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.129 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.129 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.129 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.129 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.129 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.130 15:04:17 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:02.130 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:02.130 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:02.130 Found net devices under 0000:98:00.0: mlx_0_0 
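Illustrative sketch (not captured output): the preceding trace matches the Mellanox mlx5 PCI IDs and resolves each matching PCI function to its net interface through sysfs. Using the PCI address printed above, that lookup amounts to:
    pci=0000:98:00.0                       # address taken from the "Found 0000:98:00.0" line in the trace
    ls "/sys/bus/pci/devices/$pci/net/"    # prints the netdev name for that function, e.g. mlx_0_0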
00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:02.130 Found net devices under 0000:98:00.1: mlx_0_1 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:02.130 
15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.130 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:02.131 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:02.131 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:02.131 altname enp152s0f0np0 00:22:02.131 altname ens817f0np0 00:22:02.131 inet 192.168.100.8/24 scope global mlx_0_0 00:22:02.131 valid_lft forever preferred_lft forever 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:02.131 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:02.131 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:02.131 altname enp152s0f1np1 00:22:02.131 altname ens817f1np1 00:22:02.131 inet 192.168.100.9/24 scope global mlx_0_1 00:22:02.131 valid_lft forever preferred_lft forever 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:02.131 
15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:02.131 192.168.100.9' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:02.131 192.168.100.9' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 
00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:02.131 192.168.100.9' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1898122 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1898122 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1898122 ']' 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.131 15:04:17 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.131 [2024-07-15 15:04:17.826998] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:02.131 [2024-07-15 15:04:17.827070] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.131 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.131 [2024-07-15 15:04:17.897527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.131 [2024-07-15 15:04:17.970558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.131 [2024-07-15 15:04:17.970592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
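Illustrative sketch (not captured output): the trace above derives the two RDMA target addresses from the mlx_0_* interfaces and assigns them for the test; the variable names and values below are the ones printed in the trace, written out as a stand-alone shell fragment:
    RDMA_IP_LIST="$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
                    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9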
00:22:02.131 [2024-07-15 15:04:17.970599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.131 [2024-07-15 15:04:17.970606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.131 [2024-07-15 15:04:17.970612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.131 [2024-07-15 15:04:17.970630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.704 [2024-07-15 15:04:18.646349] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfa1f90/0xfa6480) succeed. 00:22:02.704 [2024-07-15 15:04:18.658540] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfa3490/0xfe7b10) succeed. 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.704 null0 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bedbb9cbce484defb047f34392f00daa 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.704 [2024-07-15 15:04:18.743846] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.704 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.966 nvme0n1 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.966 [ 00:22:02.966 { 00:22:02.966 "name": "nvme0n1", 00:22:02.966 "aliases": [ 00:22:02.966 "bedbb9cb-ce48-4def-b047-f34392f00daa" 00:22:02.966 ], 00:22:02.966 "product_name": "NVMe disk", 00:22:02.966 "block_size": 512, 00:22:02.966 "num_blocks": 2097152, 00:22:02.966 "uuid": "bedbb9cb-ce48-4def-b047-f34392f00daa", 00:22:02.966 "assigned_rate_limits": { 00:22:02.966 "rw_ios_per_sec": 0, 00:22:02.966 "rw_mbytes_per_sec": 0, 00:22:02.966 "r_mbytes_per_sec": 0, 00:22:02.966 "w_mbytes_per_sec": 0 00:22:02.966 }, 00:22:02.966 "claimed": false, 00:22:02.966 "zoned": false, 00:22:02.966 "supported_io_types": { 00:22:02.966 "read": true, 00:22:02.966 "write": true, 00:22:02.966 "unmap": false, 00:22:02.966 "flush": true, 00:22:02.966 "reset": true, 00:22:02.966 "nvme_admin": true, 00:22:02.966 "nvme_io": true, 00:22:02.966 "nvme_io_md": false, 00:22:02.966 "write_zeroes": true, 00:22:02.966 "zcopy": false, 00:22:02.966 "get_zone_info": false, 00:22:02.966 "zone_management": false, 00:22:02.966 "zone_append": false, 00:22:02.966 "compare": true, 00:22:02.966 "compare_and_write": true, 00:22:02.966 "abort": true, 00:22:02.966 "seek_hole": false, 00:22:02.966 "seek_data": false, 00:22:02.966 "copy": true, 00:22:02.966 "nvme_iov_md": false 00:22:02.966 }, 00:22:02.966 "memory_domains": [ 00:22:02.966 { 00:22:02.966 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:02.966 "dma_device_type": 0 00:22:02.966 } 00:22:02.966 ], 00:22:02.966 "driver_specific": { 00:22:02.966 "nvme": [ 00:22:02.966 { 00:22:02.966 "trid": { 00:22:02.966 "trtype": "RDMA", 00:22:02.966 "adrfam": "IPv4", 00:22:02.966 "traddr": "192.168.100.8", 00:22:02.966 "trsvcid": "4420", 00:22:02.966 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.966 }, 00:22:02.966 "ctrlr_data": { 00:22:02.966 "cntlid": 1, 00:22:02.966 "vendor_id": "0x8086", 00:22:02.966 "model_number": "SPDK bdev Controller", 00:22:02.966 "serial_number": "00000000000000000000", 00:22:02.966 "firmware_revision": "24.09", 00:22:02.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.966 "oacs": { 00:22:02.966 "security": 0, 
00:22:02.966 "format": 0, 00:22:02.966 "firmware": 0, 00:22:02.966 "ns_manage": 0 00:22:02.966 }, 00:22:02.966 "multi_ctrlr": true, 00:22:02.966 "ana_reporting": false 00:22:02.966 }, 00:22:02.966 "vs": { 00:22:02.966 "nvme_version": "1.3" 00:22:02.966 }, 00:22:02.966 "ns_data": { 00:22:02.966 "id": 1, 00:22:02.966 "can_share": true 00:22:02.966 } 00:22:02.966 } 00:22:02.966 ], 00:22:02.966 "mp_policy": "active_passive" 00:22:02.966 } 00:22:02.966 } 00:22:02.966 ] 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.966 [2024-07-15 15:04:18.870563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:02.966 [2024-07-15 15:04:18.897450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:02.966 [2024-07-15 15:04:18.923558] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.966 [ 00:22:02.966 { 00:22:02.966 "name": "nvme0n1", 00:22:02.966 "aliases": [ 00:22:02.966 "bedbb9cb-ce48-4def-b047-f34392f00daa" 00:22:02.966 ], 00:22:02.966 "product_name": "NVMe disk", 00:22:02.966 "block_size": 512, 00:22:02.966 "num_blocks": 2097152, 00:22:02.966 "uuid": "bedbb9cb-ce48-4def-b047-f34392f00daa", 00:22:02.966 "assigned_rate_limits": { 00:22:02.966 "rw_ios_per_sec": 0, 00:22:02.966 "rw_mbytes_per_sec": 0, 00:22:02.966 "r_mbytes_per_sec": 0, 00:22:02.966 "w_mbytes_per_sec": 0 00:22:02.966 }, 00:22:02.966 "claimed": false, 00:22:02.966 "zoned": false, 00:22:02.966 "supported_io_types": { 00:22:02.966 "read": true, 00:22:02.966 "write": true, 00:22:02.966 "unmap": false, 00:22:02.966 "flush": true, 00:22:02.966 "reset": true, 00:22:02.966 "nvme_admin": true, 00:22:02.966 "nvme_io": true, 00:22:02.966 "nvme_io_md": false, 00:22:02.966 "write_zeroes": true, 00:22:02.966 "zcopy": false, 00:22:02.966 "get_zone_info": false, 00:22:02.966 "zone_management": false, 00:22:02.966 "zone_append": false, 00:22:02.966 "compare": true, 00:22:02.966 "compare_and_write": true, 00:22:02.966 "abort": true, 00:22:02.966 "seek_hole": false, 00:22:02.966 "seek_data": false, 00:22:02.966 "copy": true, 00:22:02.966 "nvme_iov_md": false 00:22:02.966 }, 00:22:02.966 "memory_domains": [ 00:22:02.966 { 00:22:02.966 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:02.966 "dma_device_type": 0 00:22:02.966 } 00:22:02.966 ], 00:22:02.966 "driver_specific": { 00:22:02.966 "nvme": [ 00:22:02.966 { 00:22:02.966 "trid": { 00:22:02.966 "trtype": "RDMA", 00:22:02.966 "adrfam": "IPv4", 00:22:02.966 "traddr": "192.168.100.8", 00:22:02.966 "trsvcid": "4420", 00:22:02.966 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.966 }, 00:22:02.966 "ctrlr_data": { 00:22:02.966 "cntlid": 2, 00:22:02.966 "vendor_id": 
"0x8086", 00:22:02.966 "model_number": "SPDK bdev Controller", 00:22:02.966 "serial_number": "00000000000000000000", 00:22:02.966 "firmware_revision": "24.09", 00:22:02.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.966 "oacs": { 00:22:02.966 "security": 0, 00:22:02.966 "format": 0, 00:22:02.966 "firmware": 0, 00:22:02.966 "ns_manage": 0 00:22:02.966 }, 00:22:02.966 "multi_ctrlr": true, 00:22:02.966 "ana_reporting": false 00:22:02.966 }, 00:22:02.966 "vs": { 00:22:02.966 "nvme_version": "1.3" 00:22:02.966 }, 00:22:02.966 "ns_data": { 00:22:02.966 "id": 1, 00:22:02.966 "can_share": true 00:22:02.966 } 00:22:02.966 } 00:22:02.966 ], 00:22:02.966 "mp_policy": "active_passive" 00:22:02.966 } 00:22:02.966 } 00:22:02.966 ] 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.iB9CpaOv1A 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.iB9CpaOv1A 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.966 15:04:18 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.966 [2024-07-15 15:04:19.000960] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:02.966 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.966 15:04:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iB9CpaOv1A 00:22:02.966 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.966 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.966 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.966 15:04:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iB9CpaOv1A 00:22:02.966 15:04:19 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.966 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.967 [2024-07-15 15:04:19.017004] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.228 nvme0n1 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 [ 00:22:03.228 { 00:22:03.228 "name": "nvme0n1", 00:22:03.228 "aliases": [ 00:22:03.228 "bedbb9cb-ce48-4def-b047-f34392f00daa" 00:22:03.228 ], 00:22:03.228 "product_name": "NVMe disk", 00:22:03.228 "block_size": 512, 00:22:03.228 "num_blocks": 2097152, 00:22:03.228 "uuid": "bedbb9cb-ce48-4def-b047-f34392f00daa", 00:22:03.228 "assigned_rate_limits": { 00:22:03.228 "rw_ios_per_sec": 0, 00:22:03.228 "rw_mbytes_per_sec": 0, 00:22:03.228 "r_mbytes_per_sec": 0, 00:22:03.228 "w_mbytes_per_sec": 0 00:22:03.228 }, 00:22:03.228 "claimed": false, 00:22:03.228 "zoned": false, 00:22:03.228 "supported_io_types": { 00:22:03.228 "read": true, 00:22:03.228 "write": true, 00:22:03.228 "unmap": false, 00:22:03.228 "flush": true, 00:22:03.228 "reset": true, 00:22:03.228 "nvme_admin": true, 00:22:03.228 "nvme_io": true, 00:22:03.228 "nvme_io_md": false, 00:22:03.228 "write_zeroes": true, 00:22:03.228 "zcopy": false, 00:22:03.228 "get_zone_info": false, 00:22:03.228 "zone_management": false, 00:22:03.228 "zone_append": false, 00:22:03.228 "compare": true, 00:22:03.228 "compare_and_write": true, 00:22:03.228 "abort": true, 00:22:03.228 "seek_hole": false, 00:22:03.228 "seek_data": false, 00:22:03.228 "copy": true, 00:22:03.228 "nvme_iov_md": false 00:22:03.228 }, 00:22:03.228 "memory_domains": [ 00:22:03.228 { 00:22:03.228 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:03.228 "dma_device_type": 0 00:22:03.228 } 00:22:03.228 ], 00:22:03.228 "driver_specific": { 00:22:03.228 "nvme": [ 00:22:03.228 { 00:22:03.228 "trid": { 00:22:03.228 "trtype": "RDMA", 00:22:03.228 "adrfam": "IPv4", 00:22:03.228 "traddr": "192.168.100.8", 00:22:03.228 "trsvcid": "4421", 00:22:03.228 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:03.228 }, 00:22:03.228 "ctrlr_data": { 00:22:03.228 "cntlid": 3, 00:22:03.228 "vendor_id": "0x8086", 00:22:03.228 "model_number": "SPDK bdev Controller", 00:22:03.228 "serial_number": "00000000000000000000", 00:22:03.228 "firmware_revision": "24.09", 00:22:03.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.228 "oacs": { 00:22:03.228 "security": 0, 00:22:03.228 "format": 0, 00:22:03.228 "firmware": 0, 00:22:03.228 "ns_manage": 0 00:22:03.228 }, 00:22:03.228 "multi_ctrlr": true, 00:22:03.228 "ana_reporting": false 00:22:03.228 }, 00:22:03.228 "vs": { 00:22:03.228 "nvme_version": "1.3" 00:22:03.228 }, 00:22:03.228 "ns_data": { 00:22:03.228 "id": 1, 00:22:03.228 "can_share": true 00:22:03.228 } 00:22:03.228 } 00:22:03.228 ], 00:22:03.228 "mp_policy": "active_passive" 00:22:03.228 } 00:22:03.228 } 00:22:03.228 ] 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.iB9CpaOv1A 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:03.228 rmmod nvme_rdma 00:22:03.228 rmmod nvme_fabrics 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1898122 ']' 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1898122 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1898122 ']' 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1898122 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1898122 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1898122' 00:22:03.228 killing process with pid 1898122 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1898122 00:22:03.228 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1898122 00:22:03.490 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.490 15:04:19 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:03.490 00:22:03.490 real 0m9.761s 00:22:03.490 user 0m4.018s 00:22:03.490 sys 0m6.262s 00:22:03.490 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:03.490 15:04:19 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:03.490 ************************************ 00:22:03.490 END TEST nvmf_async_init 00:22:03.490 ************************************ 00:22:03.490 15:04:19 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:03.490 15:04:19 nvmf_rdma -- nvmf/nvmf.sh@94 
-- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:03.490 15:04:19 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:03.490 15:04:19 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:03.490 15:04:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:03.490 ************************************ 00:22:03.490 START TEST dma 00:22:03.490 ************************************ 00:22:03.490 15:04:19 nvmf_rdma.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:03.752 * Looking for test storage... 00:22:03.752 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:03.752 15:04:19 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:03.752 15:04:19 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.752 15:04:19 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.752 15:04:19 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.752 15:04:19 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.752 15:04:19 nvmf_rdma.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.752 15:04:19 nvmf_rdma.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.752 15:04:19 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:22:03.752 15:04:19 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:03.752 15:04:19 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:22:03.752 15:04:19 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:22:03.752 15:04:19 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:22:03.752 15:04:19 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:22:03.752 15:04:19 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.752 15:04:19 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:03.752 15:04:19 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.752 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:03.753 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:03.753 15:04:19 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:22:03.753 15:04:19 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 
0000:98:00.0 (0x15b3 - 0x1015)' 00:22:11.900 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:11.900 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:11.900 Found net devices under 0000:98:00.0: mlx_0_0 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:11.900 Found net devices under 0000:98:00.1: mlx_0_1 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:22:11.900 15:04:27 nvmf_rdma.dma -- 
nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:11.900 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:11.901 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:11.901 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:11.901 altname enp152s0f0np0 00:22:11.901 altname ens817f0np0 00:22:11.901 inet 
192.168.100.8/24 scope global mlx_0_0 00:22:11.901 valid_lft forever preferred_lft forever 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:11.901 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:11.901 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:11.901 altname enp152s0f1np1 00:22:11.901 altname ens817f1np1 00:22:11.901 inet 192.168.100.9/24 scope global mlx_0_1 00:22:11.901 valid_lft forever preferred_lft forever 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # 
ip -o -4 addr show mlx_0_0 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:11.901 192.168.100.9' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:11.901 192.168.100.9' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:11.901 192.168.100.9' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:11.901 15:04:27 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.901 15:04:27 nvmf_rdma.dma -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.901 15:04:27 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=1902636 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 1902636 00:22:11.901 15:04:27 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:11.901 15:04:27 nvmf_rdma.dma -- common/autotest_common.sh@829 -- # '[' -z 1902636 ']' 00:22:11.901 15:04:27 nvmf_rdma.dma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.901 15:04:27 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.901 15:04:27 nvmf_rdma.dma -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.901 15:04:27 nvmf_rdma.dma -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.901 15:04:27 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:11.901 [2024-07-15 15:04:27.797171] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:22:11.901 [2024-07-15 15:04:27.797224] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.901 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.901 [2024-07-15 15:04:27.866269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:11.901 [2024-07-15 15:04:27.931017] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.901 [2024-07-15 15:04:27.931052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.901 [2024-07-15 15:04:27.931060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.901 [2024-07-15 15:04:27.931067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.901 [2024-07-15 15:04:27.931072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.901 [2024-07-15 15:04:27.931209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.901 [2024-07-15 15:04:27.931211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@862 -- # return 0 00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.843 15:04:28 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:12.843 [2024-07-15 15:04:28.652769] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xaa2b70/0xaa7060) succeed. 00:22:12.843 [2024-07-15 15:04:28.666012] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xaa4070/0xae86f0) succeed. 
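For reference, the target-side setup that host/dma.sh is driving here and in the entries that follow reduces to a handful of RPCs. A minimal sketch, assuming the plain scripts/rpc.py client in place of the harness's rpc_cmd wrapper and repo-relative paths instead of the absolute Jenkins workspace paths:

  # start the NVMe-oF target on two cores, as in the trace (-i 0 -e 0xFFFF -m 0x3)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # RDMA transport with the shared-buffer count used above
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  # 256 MiB malloc bdev with 512 B blocks, exported through cnode0 on the mlx_0_0 address
  scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420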
00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.843 15:04:28 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:12.843 Malloc0 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.843 15:04:28 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.843 15:04:28 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.843 15:04:28 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:12.843 [2024-07-15 15:04:28.801260] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:12.843 15:04:28 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.843 15:04:28 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:22:12.843 15:04:28 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.843 { 00:22:12.843 "params": { 00:22:12.843 "name": "Nvme$subsystem", 00:22:12.843 "trtype": "$TEST_TRANSPORT", 00:22:12.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.843 "adrfam": "ipv4", 00:22:12.843 "trsvcid": "$NVMF_PORT", 00:22:12.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.843 "hdgst": ${hdgst:-false}, 00:22:12.843 "ddgst": ${ddgst:-false} 00:22:12.843 }, 00:22:12.843 "method": "bdev_nvme_attach_controller" 00:22:12.843 } 00:22:12.843 EOF 00:22:12.843 )") 00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:22:12.843 15:04:28 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:12.843 "params": { 00:22:12.843 "name": "Nvme0", 00:22:12.843 "trtype": "rdma", 00:22:12.843 "traddr": "192.168.100.8", 00:22:12.843 "adrfam": "ipv4", 00:22:12.843 "trsvcid": "4420", 00:22:12.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:12.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:12.843 "hdgst": false, 00:22:12.843 "ddgst": false 00:22:12.843 }, 00:22:12.843 "method": "bdev_nvme_attach_controller" 00:22:12.843 }' 00:22:12.843 [2024-07-15 15:04:28.860063] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:12.843 [2024-07-15 15:04:28.860159] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1902837 ] 00:22:12.843 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.103 [2024-07-15 15:04:28.919021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:13.103 [2024-07-15 15:04:28.971668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.103 [2024-07-15 15:04:28.971668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.445 bdev Nvme0n1 reports 1 memory domains 00:22:18.445 bdev Nvme0n1 supports RDMA memory domain 00:22:18.445 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:18.445 ========================================================================== 00:22:18.445 Latency [us] 00:22:18.445 IOPS MiB/s Average min max 00:22:18.445 Core 2: 23937.57 93.51 667.85 295.18 9634.62 00:22:18.445 Core 3: 27907.57 109.01 572.71 182.73 9822.29 00:22:18.445 ========================================================================== 00:22:18.445 Total : 51845.14 202.52 616.63 182.73 9822.29 00:22:18.445 00:22:18.445 Total operations: 259278, translate 259278 pull_push 0 memzero 0 00:22:18.445 15:04:34 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:22:18.445 15:04:34 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:22:18.445 15:04:34 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:22:18.445 [2024-07-15 15:04:34.337710] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
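The test_dma binary is run against different bdevs so that each DMA path gets exercised: the NVMe/RDMA bdev Nvme0n1 (translate), the plain Malloc0 bdev, which reports no RDMA memory domain (pull_push), and lvs0/lvol0 for the memzero and second translate passes. A minimal sketch of the translate invocation, using process substitution in place of the /dev/fd/62 plumbing the harness sets up; gen_nvmf_target_json is the nvmf/common.sh helper that emits the bdev_nvme_attach_controller config printed above:

  test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
      -b Nvme0n1 -f -x translate --json <(gen_nvmf_target_json 0)
  # the later passes only swap the bdev and mode:
  #   -b Malloc0      -x pull_push
  #   -b lvs0/lvol0   -x memzero   (and a final -x translate run)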
00:22:18.445 [2024-07-15 15:04:34.337771] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903849 ] 00:22:18.445 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.445 [2024-07-15 15:04:34.393575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:18.445 [2024-07-15 15:04:34.446859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.445 [2024-07-15 15:04:34.446859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.738 bdev Malloc0 reports 2 memory domains 00:22:23.738 bdev Malloc0 doesn't support RDMA memory domain 00:22:23.738 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:23.738 ========================================================================== 00:22:23.738 Latency [us] 00:22:23.738 IOPS MiB/s Average min max 00:22:23.738 Core 2: 19021.57 74.30 840.56 311.00 1351.13 00:22:23.738 Core 3: 19109.15 74.65 836.70 308.77 1455.33 00:22:23.738 ========================================================================== 00:22:23.738 Total : 38130.71 148.95 838.63 308.77 1455.33 00:22:23.738 00:22:23.738 Total operations: 190702, translate 0 pull_push 762808 memzero 0 00:22:23.738 15:04:39 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:22:23.738 15:04:39 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:22:23.738 15:04:39 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:22:23.738 15:04:39 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:22:23.738 Ignoring -M option 00:22:23.738 [2024-07-15 15:04:39.709255] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:22:23.738 [2024-07-15 15:04:39.709318] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904894 ] 00:22:23.738 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.738 [2024-07-15 15:04:39.764344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:24.000 [2024-07-15 15:04:39.816864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.000 [2024-07-15 15:04:39.816865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.287 bdev f97925b5-7160-472a-b9a3-042ae42df775 reports 1 memory domains 00:22:29.287 bdev f97925b5-7160-472a-b9a3-042ae42df775 supports RDMA memory domain 00:22:29.287 Initialization complete, running randread IO for 5 sec on 2 cores 00:22:29.287 ========================================================================== 00:22:29.287 Latency [us] 00:22:29.287 IOPS MiB/s Average min max 00:22:29.287 Core 2: 131956.75 515.46 120.77 53.35 3371.68 00:22:29.287 Core 3: 137376.62 536.63 115.99 47.02 3334.71 00:22:29.287 ========================================================================== 00:22:29.287 Total : 269333.37 1052.08 118.33 47.02 3371.68 00:22:29.287 00:22:29.287 Total operations: 1346749, translate 0 pull_push 0 memzero 1346749 00:22:29.287 15:04:45 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:22:29.287 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.287 [2024-07-15 15:04:45.290535] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:31.829 Initializing NVMe Controllers 00:22:31.829 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:22:31.829 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:22:31.829 Initialization complete. Launching workers. 00:22:31.829 ======================================================== 00:22:31.829 Latency(us) 00:22:31.829 Device Information : IOPS MiB/s Average min max 00:22:31.829 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.87 7972.82 5987.32 8978.62 00:22:31.829 ======================================================== 00:22:31.829 Total : 2016.00 7.87 7972.82 5987.32 8978.62 00:22:31.829 00:22:31.830 15:04:47 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:22:31.830 15:04:47 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:22:31.830 15:04:47 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:22:31.830 15:04:47 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:22:31.830 [2024-07-15 15:04:47.672248] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
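The spdk_nvme_perf step above is an ordinary initiator-side check that the RDMA listener accepts fabric I/O before the final lvol translate pass: 4 KiB writes at queue depth 16 for one second against the 192.168.100.8:4420 listener. A minimal sketch with a repo-relative path (the trace uses the absolute Jenkins workspace path):

  build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'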
00:22:31.830 [2024-07-15 15:04:47.672296] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906518 ] 00:22:31.830 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.830 [2024-07-15 15:04:47.727391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:31.830 [2024-07-15 15:04:47.780347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.830 [2024-07-15 15:04:47.780469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.111 bdev 84e7e8ae-debe-464d-bc70-bea2d0017b28 reports 1 memory domains 00:22:37.111 bdev 84e7e8ae-debe-464d-bc70-bea2d0017b28 supports RDMA memory domain 00:22:37.111 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:37.111 ========================================================================== 00:22:37.111 Latency [us] 00:22:37.111 IOPS MiB/s Average min max 00:22:37.111 Core 2: 21382.72 83.53 747.77 9.46 18259.33 00:22:37.111 Core 3: 27528.98 107.54 580.69 7.36 15746.75 00:22:37.111 ========================================================================== 00:22:37.111 Total : 48911.71 191.06 653.73 7.36 18259.33 00:22:37.111 00:22:37.111 Total operations: 244580, translate 244430 pull_push 0 memzero 150 00:22:37.111 15:04:53 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:22:37.111 15:04:53 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:22:37.111 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:37.111 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:22:37.111 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:37.111 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:37.111 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:22:37.111 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:37.111 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:37.111 rmmod nvme_rdma 00:22:37.111 rmmod nvme_fabrics 00:22:37.372 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:37.372 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:22:37.372 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:22:37.372 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 1902636 ']' 00:22:37.372 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 1902636 00:22:37.372 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@948 -- # '[' -z 1902636 ']' 00:22:37.372 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # kill -0 1902636 00:22:37.372 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # uname 00:22:37.372 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.372 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1902636 00:22:37.372 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:37.372 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:37.372 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1902636' 00:22:37.372 killing process with pid 1902636 00:22:37.372 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@967 -- # kill 1902636 00:22:37.372 15:04:53 nvmf_rdma.dma -- 
common/autotest_common.sh@972 -- # wait 1902636 00:22:37.633 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:37.633 15:04:53 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:37.633 00:22:37.633 real 0m33.950s 00:22:37.633 user 1m35.672s 00:22:37.633 sys 0m6.916s 00:22:37.633 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:37.633 15:04:53 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:37.633 ************************************ 00:22:37.633 END TEST dma 00:22:37.633 ************************************ 00:22:37.633 15:04:53 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:37.633 15:04:53 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:22:37.633 15:04:53 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:37.633 15:04:53 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.633 15:04:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:37.633 ************************************ 00:22:37.634 START TEST nvmf_identify 00:22:37.634 ************************************ 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:22:37.634 * Looking for test storage... 00:22:37.634 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.634 15:04:53 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:45.774 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:45.774 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:45.774 Found net devices under 0000:98:00.0: mlx_0_0 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:45.774 Found net devices under 0000:98:00.1: mlx_0_1 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.774 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:45.775 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:45.775 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:45.775 altname enp152s0f0np0 00:22:45.775 altname ens817f0np0 00:22:45.775 inet 192.168.100.8/24 scope global mlx_0_0 00:22:45.775 valid_lft forever preferred_lft forever 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:45.775 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:45.775 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:45.775 altname enp152s0f1np1 00:22:45.775 altname ens817f1np1 00:22:45.775 inet 192.168.100.9/24 scope global mlx_0_1 00:22:45.775 valid_lft forever 
preferred_lft forever 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 
00:22:45.775 192.168.100.9' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:45.775 192.168.100.9' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:45.775 192.168.100.9' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1911849 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1911849 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1911849 ']' 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.775 15:05:01 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:45.775 [2024-07-15 15:05:01.716140] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:45.775 [2024-07-15 15:05:01.716208] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.775 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.775 [2024-07-15 15:05:01.787896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.037 [2024-07-15 15:05:01.863756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:46.037 [2024-07-15 15:05:01.863795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.037 [2024-07-15 15:05:01.863803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.037 [2024-07-15 15:05:01.863809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.037 [2024-07-15 15:05:01.863814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.037 [2024-07-15 15:05:01.863954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.037 [2024-07-15 15:05:01.863973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.037 [2024-07-15 15:05:01.864096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.037 [2024-07-15 15:05:01.864097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.608 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.608 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:46.608 15:05:02 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:46.608 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.608 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.608 [2024-07-15 15:05:02.546666] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2228200/0x222c6f0) succeed. 00:22:46.608 [2024-07-15 15:05:02.563332] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2229840/0x226dd80) succeed. 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.869 Malloc0 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.869 15:05:02 
nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.869 [2024-07-15 15:05:02.779026] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:46.869 [ 00:22:46.869 { 00:22:46.869 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:46.869 "subtype": "Discovery", 00:22:46.869 "listen_addresses": [ 00:22:46.869 { 00:22:46.869 "trtype": "RDMA", 00:22:46.869 "adrfam": "IPv4", 00:22:46.869 "traddr": "192.168.100.8", 00:22:46.869 "trsvcid": "4420" 00:22:46.869 } 00:22:46.869 ], 00:22:46.869 "allow_any_host": true, 00:22:46.869 "hosts": [] 00:22:46.869 }, 00:22:46.869 { 00:22:46.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.869 "subtype": "NVMe", 00:22:46.869 "listen_addresses": [ 00:22:46.869 { 00:22:46.869 "trtype": "RDMA", 00:22:46.869 "adrfam": "IPv4", 00:22:46.869 "traddr": "192.168.100.8", 00:22:46.869 "trsvcid": "4420" 00:22:46.869 } 00:22:46.869 ], 00:22:46.869 "allow_any_host": true, 00:22:46.869 "hosts": [], 00:22:46.869 "serial_number": "SPDK00000000000001", 00:22:46.869 "model_number": "SPDK bdev Controller", 00:22:46.869 "max_namespaces": 32, 00:22:46.869 "min_cntlid": 1, 00:22:46.869 "max_cntlid": 65519, 00:22:46.869 "namespaces": [ 00:22:46.869 { 00:22:46.869 "nsid": 1, 00:22:46.869 "bdev_name": "Malloc0", 00:22:46.869 "name": "Malloc0", 00:22:46.869 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:46.869 "eui64": "ABCDEF0123456789", 00:22:46.869 "uuid": "92fa6280-afa6-44d7-bd7a-305109e5a0f1" 00:22:46.869 } 00:22:46.869 ] 00:22:46.869 } 00:22:46.869 ] 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.869 15:05:02 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:46.870 [2024-07-15 15:05:02.840319] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:22:46.870 [2024-07-15 15:05:02.840364] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912052 ] 00:22:46.870 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.870 [2024-07-15 15:05:02.896723] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:46.870 [2024-07-15 15:05:02.896812] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:22:46.870 [2024-07-15 15:05:02.896826] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:22:46.870 [2024-07-15 15:05:02.896830] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:22:46.870 [2024-07-15 15:05:02.896859] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:46.870 [2024-07-15 15:05:02.910050] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:22:46.870 [2024-07-15 15:05:02.927718] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:46.870 [2024-07-15 15:05:02.927728] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:22:46.870 [2024-07-15 15:05:02.927736] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927742] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927747] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927752] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927758] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927763] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927768] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927773] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927778] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927787] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927792] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927797] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927802] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927808] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927813] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927818] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927823] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927828] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927833] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927838] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927843] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927849] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927854] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927859] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927864] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927869] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927874] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927879] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927885] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927890] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927895] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927899] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:22:46.870 [2024-07-15 15:05:02.927904] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:46.870 [2024-07-15 15:05:02.927908] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:22:46.870 [2024-07-15 15:05:02.927926] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:46.870 [2024-07-15 15:05:02.927939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181200 00:22:47.139 [2024-07-15 15:05:02.934239] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.139 [2024-07-15 15:05:02.934251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:47.139 [2024-07-15 15:05:02.934259] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934266] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:47.139 [2024-07-15 15:05:02.934273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:47.139 [2024-07-15 15:05:02.934281] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:47.139 [2024-07-15 15:05:02.934294] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.139 [2024-07-15 15:05:02.934321] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.139 [2024-07-15 15:05:02.934326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:22:47.139 [2024-07-15 15:05:02.934331] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:47.139 [2024-07-15 15:05:02.934336] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934341] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:47.139 [2024-07-15 15:05:02.934348] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.139 [2024-07-15 15:05:02.934377] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.139 [2024-07-15 15:05:02.934382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:22:47.139 [2024-07-15 15:05:02.934388] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:47.139 [2024-07-15 15:05:02.934392] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:47.139 [2024-07-15 15:05:02.934405] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.139 [2024-07-15 15:05:02.934430] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.139 [2024-07-15 15:05:02.934435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:47.139 [2024-07-15 15:05:02.934440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:47.139 [2024-07-15 15:05:02.934445] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934453] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.139 [2024-07-15 15:05:02.934488] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.139 [2024-07-15 15:05:02.934493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:47.139 [2024-07-15 15:05:02.934497] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:47.139 [2024-07-15 15:05:02.934502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:47.139 [2024-07-15 15:05:02.934510] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:47.139 [2024-07-15 15:05:02.934622] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:47.139 [2024-07-15 15:05:02.934626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:47.139 [2024-07-15 15:05:02.934635] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.139 [2024-07-15 15:05:02.934672] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.139 [2024-07-15 15:05:02.934677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:47.139 [2024-07-15 15:05:02.934682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:47.139 [2024-07-15 15:05:02.934686] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934694] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.139 [2024-07-15 15:05:02.934723] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.139 [2024-07-15 15:05:02.934728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:47.139 [2024-07-15 15:05:02.934733] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:22:47.139 [2024-07-15 15:05:02.934738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:47.139 [2024-07-15 15:05:02.934742] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934748] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:47.139 [2024-07-15 15:05:02.934760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:47.139 [2024-07-15 15:05:02.934769] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.139 [2024-07-15 15:05:02.934776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181200 00:22:47.139 [2024-07-15 15:05:02.934818] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.139 [2024-07-15 15:05:02.934823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:47.139 [2024-07-15 15:05:02.934830] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:47.139 [2024-07-15 15:05:02.934835] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:47.139 [2024-07-15 15:05:02.934839] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:47.139 [2024-07-15 15:05:02.934845] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:47.139 [2024-07-15 15:05:02.934851] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:47.139 [2024-07-15 15:05:02.934855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:47.140 [2024-07-15 15:05:02.934860] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.934867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:47.140 [2024-07-15 15:05:02.934873] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.934880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.140 [2024-07-15 15:05:02.934908] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.140 [2024-07-15 15:05:02.934913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:47.140 [2024-07-15 15:05:02.934921] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.934927] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.140 [2024-07-15 15:05:02.934934] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.934939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.140 [2024-07-15 15:05:02.934945] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.934951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.140 [2024-07-15 15:05:02.934957] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.934963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.140 [2024-07-15 15:05:02.934967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:47.140 [2024-07-15 15:05:02.934972] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.934981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:47.140 [2024-07-15 15:05:02.934988] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.934995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.140 [2024-07-15 15:05:02.935013] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.140 [2024-07-15 15:05:02.935018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:22:47.140 [2024-07-15 15:05:02.935023] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:47.140 [2024-07-15 15:05:02.935030] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:47.140 [2024-07-15 15:05:02.935035] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935044] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181200 00:22:47.140 [2024-07-15 15:05:02.935080] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.140 [2024-07-15 15:05:02.935084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:47.140 [2024-07-15 15:05:02.935090] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935099] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:47.140 [2024-07-15 15:05:02.935121] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x181200 00:22:47.140 [2024-07-15 15:05:02.935135] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.140 [2024-07-15 15:05:02.935160] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.140 [2024-07-15 15:05:02.935165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:47.140 [2024-07-15 15:05:02.935176] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x181200 00:22:47.140 [2024-07-15 15:05:02.935187] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935192] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.140 [2024-07-15 15:05:02.935197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:47.140 [2024-07-15 15:05:02.935202] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935227] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.140 [2024-07-15 15:05:02.935238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:47.140 [2024-07-15 15:05:02.935247] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x181200 00:22:47.140 [2024-07-15 15:05:02.935258] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181200 00:22:47.140 [2024-07-15 15:05:02.935286] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.140 [2024-07-15 15:05:02.935291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:47.140 [2024-07-15 15:05:02.935300] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181200 00:22:47.140 ===================================================== 00:22:47.140 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:47.140 
===================================================== 00:22:47.140 Controller Capabilities/Features 00:22:47.140 ================================ 00:22:47.140 Vendor ID: 0000 00:22:47.140 Subsystem Vendor ID: 0000 00:22:47.140 Serial Number: .................... 00:22:47.140 Model Number: ........................................ 00:22:47.140 Firmware Version: 24.09 00:22:47.140 Recommended Arb Burst: 0 00:22:47.140 IEEE OUI Identifier: 00 00 00 00:22:47.140 Multi-path I/O 00:22:47.140 May have multiple subsystem ports: No 00:22:47.140 May have multiple controllers: No 00:22:47.140 Associated with SR-IOV VF: No 00:22:47.140 Max Data Transfer Size: 131072 00:22:47.140 Max Number of Namespaces: 0 00:22:47.140 Max Number of I/O Queues: 1024 00:22:47.140 NVMe Specification Version (VS): 1.3 00:22:47.140 NVMe Specification Version (Identify): 1.3 00:22:47.140 Maximum Queue Entries: 128 00:22:47.140 Contiguous Queues Required: Yes 00:22:47.140 Arbitration Mechanisms Supported 00:22:47.140 Weighted Round Robin: Not Supported 00:22:47.140 Vendor Specific: Not Supported 00:22:47.140 Reset Timeout: 15000 ms 00:22:47.140 Doorbell Stride: 4 bytes 00:22:47.140 NVM Subsystem Reset: Not Supported 00:22:47.140 Command Sets Supported 00:22:47.140 NVM Command Set: Supported 00:22:47.140 Boot Partition: Not Supported 00:22:47.140 Memory Page Size Minimum: 4096 bytes 00:22:47.140 Memory Page Size Maximum: 4096 bytes 00:22:47.140 Persistent Memory Region: Not Supported 00:22:47.140 Optional Asynchronous Events Supported 00:22:47.140 Namespace Attribute Notices: Not Supported 00:22:47.140 Firmware Activation Notices: Not Supported 00:22:47.140 ANA Change Notices: Not Supported 00:22:47.140 PLE Aggregate Log Change Notices: Not Supported 00:22:47.140 LBA Status Info Alert Notices: Not Supported 00:22:47.140 EGE Aggregate Log Change Notices: Not Supported 00:22:47.140 Normal NVM Subsystem Shutdown event: Not Supported 00:22:47.140 Zone Descriptor Change Notices: Not Supported 00:22:47.140 Discovery Log Change Notices: Supported 00:22:47.140 Controller Attributes 00:22:47.140 128-bit Host Identifier: Not Supported 00:22:47.140 Non-Operational Permissive Mode: Not Supported 00:22:47.140 NVM Sets: Not Supported 00:22:47.140 Read Recovery Levels: Not Supported 00:22:47.140 Endurance Groups: Not Supported 00:22:47.140 Predictable Latency Mode: Not Supported 00:22:47.140 Traffic Based Keep ALive: Not Supported 00:22:47.140 Namespace Granularity: Not Supported 00:22:47.140 SQ Associations: Not Supported 00:22:47.140 UUID List: Not Supported 00:22:47.140 Multi-Domain Subsystem: Not Supported 00:22:47.140 Fixed Capacity Management: Not Supported 00:22:47.140 Variable Capacity Management: Not Supported 00:22:47.140 Delete Endurance Group: Not Supported 00:22:47.140 Delete NVM Set: Not Supported 00:22:47.140 Extended LBA Formats Supported: Not Supported 00:22:47.140 Flexible Data Placement Supported: Not Supported 00:22:47.140 00:22:47.140 Controller Memory Buffer Support 00:22:47.140 ================================ 00:22:47.140 Supported: No 00:22:47.140 00:22:47.140 Persistent Memory Region Support 00:22:47.140 ================================ 00:22:47.140 Supported: No 00:22:47.140 00:22:47.140 Admin Command Set Attributes 00:22:47.140 ============================ 00:22:47.140 Security Send/Receive: Not Supported 00:22:47.140 Format NVM: Not Supported 00:22:47.140 Firmware Activate/Download: Not Supported 00:22:47.140 Namespace Management: Not Supported 00:22:47.140 Device Self-Test: Not Supported 00:22:47.140 
Directives: Not Supported 00:22:47.140 NVMe-MI: Not Supported 00:22:47.140 Virtualization Management: Not Supported 00:22:47.140 Doorbell Buffer Config: Not Supported 00:22:47.140 Get LBA Status Capability: Not Supported 00:22:47.140 Command & Feature Lockdown Capability: Not Supported 00:22:47.140 Abort Command Limit: 1 00:22:47.140 Async Event Request Limit: 4 00:22:47.140 Number of Firmware Slots: N/A 00:22:47.140 Firmware Slot 1 Read-Only: N/A 00:22:47.140 Firmware Activation Without Reset: N/A 00:22:47.141 Multiple Update Detection Support: N/A 00:22:47.141 Firmware Update Granularity: No Information Provided 00:22:47.141 Per-Namespace SMART Log: No 00:22:47.141 Asymmetric Namespace Access Log Page: Not Supported 00:22:47.141 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:47.141 Command Effects Log Page: Not Supported 00:22:47.141 Get Log Page Extended Data: Supported 00:22:47.141 Telemetry Log Pages: Not Supported 00:22:47.141 Persistent Event Log Pages: Not Supported 00:22:47.141 Supported Log Pages Log Page: May Support 00:22:47.141 Commands Supported & Effects Log Page: Not Supported 00:22:47.141 Feature Identifiers & Effects Log Page:May Support 00:22:47.141 NVMe-MI Commands & Effects Log Page: May Support 00:22:47.141 Data Area 4 for Telemetry Log: Not Supported 00:22:47.141 Error Log Page Entries Supported: 128 00:22:47.141 Keep Alive: Not Supported 00:22:47.141 00:22:47.141 NVM Command Set Attributes 00:22:47.141 ========================== 00:22:47.141 Submission Queue Entry Size 00:22:47.141 Max: 1 00:22:47.141 Min: 1 00:22:47.141 Completion Queue Entry Size 00:22:47.141 Max: 1 00:22:47.141 Min: 1 00:22:47.141 Number of Namespaces: 0 00:22:47.141 Compare Command: Not Supported 00:22:47.141 Write Uncorrectable Command: Not Supported 00:22:47.141 Dataset Management Command: Not Supported 00:22:47.141 Write Zeroes Command: Not Supported 00:22:47.141 Set Features Save Field: Not Supported 00:22:47.141 Reservations: Not Supported 00:22:47.141 Timestamp: Not Supported 00:22:47.141 Copy: Not Supported 00:22:47.141 Volatile Write Cache: Not Present 00:22:47.141 Atomic Write Unit (Normal): 1 00:22:47.141 Atomic Write Unit (PFail): 1 00:22:47.141 Atomic Compare & Write Unit: 1 00:22:47.141 Fused Compare & Write: Supported 00:22:47.141 Scatter-Gather List 00:22:47.141 SGL Command Set: Supported 00:22:47.141 SGL Keyed: Supported 00:22:47.141 SGL Bit Bucket Descriptor: Not Supported 00:22:47.141 SGL Metadata Pointer: Not Supported 00:22:47.141 Oversized SGL: Not Supported 00:22:47.141 SGL Metadata Address: Not Supported 00:22:47.141 SGL Offset: Supported 00:22:47.141 Transport SGL Data Block: Not Supported 00:22:47.141 Replay Protected Memory Block: Not Supported 00:22:47.141 00:22:47.141 Firmware Slot Information 00:22:47.141 ========================= 00:22:47.141 Active slot: 0 00:22:47.141 00:22:47.141 00:22:47.141 Error Log 00:22:47.141 ========= 00:22:47.141 00:22:47.141 Active Namespaces 00:22:47.141 ================= 00:22:47.141 Discovery Log Page 00:22:47.141 ================== 00:22:47.141 Generation Counter: 2 00:22:47.141 Number of Records: 2 00:22:47.141 Record Format: 0 00:22:47.141 00:22:47.141 Discovery Log Entry 0 00:22:47.141 ---------------------- 00:22:47.141 Transport Type: 1 (RDMA) 00:22:47.141 Address Family: 1 (IPv4) 00:22:47.141 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:47.141 Entry Flags: 00:22:47.141 Duplicate Returned Information: 1 00:22:47.141 Explicit Persistent Connection Support for Discovery: 1 00:22:47.141 Transport Requirements: 
00:22:47.141 Secure Channel: Not Required 00:22:47.141 Port ID: 0 (0x0000) 00:22:47.141 Controller ID: 65535 (0xffff) 00:22:47.141 Admin Max SQ Size: 128 00:22:47.141 Transport Service Identifier: 4420 00:22:47.141 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:47.141 Transport Address: 192.168.100.8 00:22:47.141 Transport Specific Address Subtype - RDMA 00:22:47.141 RDMA QP Service Type: 1 (Reliable Connected) 00:22:47.141 RDMA Provider Type: 1 (No provider specified) 00:22:47.141 RDMA CM Service: 1 (RDMA_CM) 00:22:47.141 Discovery Log Entry 1 00:22:47.141 ---------------------- 00:22:47.141 Transport Type: 1 (RDMA) 00:22:47.141 Address Family: 1 (IPv4) 00:22:47.141 Subsystem Type: 2 (NVM Subsystem) 00:22:47.141 Entry Flags: 00:22:47.141 Duplicate Returned Information: 0 00:22:47.141 Explicit Persistent Connection Support for Discovery: 0 00:22:47.141 Transport Requirements: 00:22:47.141 Secure Channel: Not Required 00:22:47.141 Port ID: 0 (0x0000) 00:22:47.141 Controller ID: 65535 (0xffff) 00:22:47.141 Admin Max SQ Size: [2024-07-15 15:05:02.935375] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:47.141 [2024-07-15 15:05:02.935384] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59819 doesn't match qid 00:22:47.141 [2024-07-15 15:05:02.935398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32514 cdw0:5 sqhd:1ad0 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935405] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59819 doesn't match qid 00:22:47.141 [2024-07-15 15:05:02.935411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32514 cdw0:5 sqhd:1ad0 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935417] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59819 doesn't match qid 00:22:47.141 [2024-07-15 15:05:02.935423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32514 cdw0:5 sqhd:1ad0 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935428] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59819 doesn't match qid 00:22:47.141 [2024-07-15 15:05:02.935434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32514 cdw0:5 sqhd:1ad0 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935442] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.141 [2024-07-15 15:05:02.935478] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.141 [2024-07-15 15:05:02.935483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935490] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.141 [2024-07-15 15:05:02.935502] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181200 00:22:47.141 [2024-07-15 
15:05:02.935528] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.141 [2024-07-15 15:05:02.935533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935538] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:47.141 [2024-07-15 15:05:02.935543] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:47.141 [2024-07-15 15:05:02.935547] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935555] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.141 [2024-07-15 15:05:02.935586] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.141 [2024-07-15 15:05:02.935590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935595] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935604] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.141 [2024-07-15 15:05:02.935632] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.141 [2024-07-15 15:05:02.935637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935643] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935651] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.141 [2024-07-15 15:05:02.935683] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.141 [2024-07-15 15:05:02.935688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935694] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935703] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.141 [2024-07-15 15:05:02.935738] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.141 [2024-07-15 15:05:02.935742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935748] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935757] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.141 [2024-07-15 15:05:02.935763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.141 [2024-07-15 15:05:02.935783] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.141 [2024-07-15 15:05:02.935788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:22:47.141 [2024-07-15 15:05:02.935793] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.935802] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.935809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.935833] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.935838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.935843] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.935852] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.935859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.935881] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.935885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.935891] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.935899] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.935906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.935925] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.935929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.935935] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.935943] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.935952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.935976] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.935980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.935985] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.935994] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936028] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936038] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936046] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936072] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936082] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936090] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936116] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936126] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936134] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936164] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936174] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936183] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936219] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936233] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936242] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936272] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936282] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936290] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936316] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936326] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936335] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936369] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936378] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936387] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936417] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936427] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936435] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936467] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936477] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936486] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936513] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936523] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936535] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936563] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936573] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936581] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936607] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936617] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936626] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936655] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.142 [2024-07-15 15:05:02.936660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:22:47.142 [2024-07-15 15:05:02.936665] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936674] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.142 [2024-07-15 15:05:02.936680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.142 [2024-07-15 15:05:02.936706] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.936710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.936715] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936724] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.936748] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.936752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.936757] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936766] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.936792] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.936796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.936801] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936811] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.936837] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.936842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.936847] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936855] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.936885] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.936890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.936895] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936904] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.936927] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.936932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.936937] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936946] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.936972] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.936976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.936981] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936990] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.936996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.937020] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.937024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.937029] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937038] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.937070] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.937075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.937081] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937090] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.937117] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.937122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.937127] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937136] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.937170] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.937174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.937179] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937188] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.937214] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.937218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.937223] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937235] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.937269] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.937274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.937279] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937287] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:22:47.143 [2024-07-15 15:05:02.937313] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.143 [2024-07-15 15:05:02.937318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:22:47.143 [2024-07-15 15:05:02.937323] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181200 00:22:47.143 [2024-07-15 15:05:02.937331] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937357] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937369] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937377] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937405] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937415] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937423] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937455] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937465] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937474] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937506] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937515] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937524] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937554] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937564] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937572] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937600] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937610] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937618] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937644] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937655] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937664] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937692] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937702] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937710] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937738] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937748] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937756] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937784] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937794] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937802] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937828] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937838] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937846] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937876] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937886] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937895] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937926] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937936] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937944] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.937973] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.937977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:22:47.144 [2024-07-15 15:05:02.937982] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937991] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.144 [2024-07-15 15:05:02.937998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.144 [2024-07-15 15:05:02.938023] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.144 [2024-07-15 15:05:02.938027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:22:47.145 [2024-07-15 15:05:02.938033] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.938041] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.938048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.145 [2024-07-15 15:05:02.938071] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.145 [2024-07-15 15:05:02.938076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:22:47.145 [2024-07-15 15:05:02.938081] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.938091] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.938098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.145 [2024-07-15 15:05:02.938116] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.145 [2024-07-15 15:05:02.938120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:47.145 [2024-07-15 15:05:02.938125] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.938134] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.938141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.145 [2024-07-15 15:05:02.938168] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.145 [2024-07-15 15:05:02.938172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:22:47.145 [2024-07-15 15:05:02.938178] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.938186] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.938193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.145 [2024-07-15 15:05:02.938215] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.145 [2024-07-15 15:05:02.938220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:22:47.145 [2024-07-15 15:05:02.938225] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.942240] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.942248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.145 [2024-07-15 15:05:02.942269] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.145 [2024-07-15 15:05:02.942274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:22:47.145 [2024-07-15 15:05:02.942279] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:02.942285] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:47.145 128 00:22:47.145 Transport Service Identifier: 4420 00:22:47.145 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:47.145 Transport Address: 192.168.100.8 00:22:47.145 Transport Specific Address Subtype - RDMA 00:22:47.145 RDMA QP Service Type: 1 (Reliable Connected) 00:22:47.145 RDMA Provider Type: 1 (No provider specified) 00:22:47.145 RDMA CM Service: 1 (RDMA_CM) 00:22:47.145 15:05:03 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:47.145 [2024-07-15 15:05:03.028432] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
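The trace that follows is the -L all debug output of that spdk_nvme_identify run against nqn.2016-06.io.spdk:cnode1. As a point of reference, the sketch below shows roughly how a host application would drive the same connect-and-identify path through the public SPDK host API (spdk/nvme.h); it is an illustration written for this log, not code from the test, and the application name and printed fields are arbitrary.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";   /* arbitrary application name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronously runs the FABRIC CONNECT / property handshake and the
	 * controller init state machine that the DEBUG lines below step through. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("%s: CNTLID 0x%04x\n", trid.subnqn, (unsigned)cdata->cntlid);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Linking is build-specific (the SPDK NVMe and env libraries plus DPDK), so no compile line is given here; the CNTLID printed would correspond to the 0x0001 reported by the connect poll further down in the trace.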
00:22:47.145 [2024-07-15 15:05:03.028480] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912152 ] 00:22:47.145 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.145 [2024-07-15 15:05:03.080886] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:47.145 [2024-07-15 15:05:03.080964] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:22:47.145 [2024-07-15 15:05:03.080979] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:22:47.145 [2024-07-15 15:05:03.080983] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:22:47.145 [2024-07-15 15:05:03.081007] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:47.145 [2024-07-15 15:05:03.093901] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:22:47.145 [2024-07-15 15:05:03.111487] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:47.145 [2024-07-15 15:05:03.111496] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:22:47.145 [2024-07-15 15:05:03.111504] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111510] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111515] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111523] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111529] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111534] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111538] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111543] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111548] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111553] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111558] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111563] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111568] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111573] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111578] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111583] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111588] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111593] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111598] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111603] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111608] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111613] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111618] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111623] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111628] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181200 00:22:47.145 [2024-07-15 15:05:03.111633] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.111637] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.111643] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.111648] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.111652] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.111657] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.111662] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:22:47.146 [2024-07-15 15:05:03.111666] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:47.146 [2024-07-15 15:05:03.111669] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:22:47.146 [2024-07-15 15:05:03.111685] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.111697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181200 00:22:47.146 [2024-07-15 15:05:03.118237] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.146 [2024-07-15 15:05:03.118246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:47.146 [2024-07-15 15:05:03.118252] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118259] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:47.146 [2024-07-15 15:05:03.118265] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:47.146 [2024-07-15 15:05:03.118270] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:47.146 [2024-07-15 15:05:03.118282] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.146 [2024-07-15 15:05:03.118305] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.146 [2024-07-15 15:05:03.118310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:22:47.146 [2024-07-15 15:05:03.118315] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:47.146 [2024-07-15 15:05:03.118320] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:47.146 [2024-07-15 15:05:03.118332] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.146 [2024-07-15 15:05:03.118358] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.146 [2024-07-15 15:05:03.118363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:22:47.146 [2024-07-15 15:05:03.118368] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:47.146 [2024-07-15 15:05:03.118373] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:47.146 [2024-07-15 15:05:03.118386] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.146 [2024-07-15 15:05:03.118407] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.146 [2024-07-15 15:05:03.118412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:47.146 [2024-07-15 15:05:03.118417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:47.146 [2024-07-15 15:05:03.118422] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181200 
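The completions above carry the controller registers read over the fabric during connect: cdw0 0x10300 from the read-vs step decodes to NVMe version 1.3, and cdw0 0x1e01007f from the read-cap step is the low dword of CAP (MQES 0x7f, i.e. 128 queue entries supported, and TO 30, i.e. the 15000 ms readiness timeouts used for the CC/CSTS waits in this trace). Once spdk_nvme_connect() has returned, the cached values can be read back through the public register accessors; the helper below is again only an illustration, meant to be called with the ctrlr handle from the earlier sketch.

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative helper (not test code): dump the registers cached by the
 * property-get exchange above. Pass the ctrlr returned by spdk_nvme_connect(). */
static void
print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* VS 0x10300 -> 1.3; CAP.MQES 0x7f -> 128 entries; CAP.TO 30 -> 15 s. */
	printf("VS   %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
	printf("CAP  MQES=%u TO=%u\n", (unsigned)cap.bits.mqes, (unsigned)cap.bits.to);
	printf("CSTS RDY=%u\n", (unsigned)csts.bits.rdy);
}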
00:22:47.146 [2024-07-15 15:05:03.118430] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.146 [2024-07-15 15:05:03.118455] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.146 [2024-07-15 15:05:03.118460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:47.146 [2024-07-15 15:05:03.118465] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:47.146 [2024-07-15 15:05:03.118469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:47.146 [2024-07-15 15:05:03.118474] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118480] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:47.146 [2024-07-15 15:05:03.118585] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:47.146 [2024-07-15 15:05:03.118589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:47.146 [2024-07-15 15:05:03.118597] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.146 [2024-07-15 15:05:03.118620] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.146 [2024-07-15 15:05:03.118625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:47.146 [2024-07-15 15:05:03.118630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:47.146 [2024-07-15 15:05:03.118635] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118643] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.146 [2024-07-15 15:05:03.118666] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.146 [2024-07-15 15:05:03.118671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:47.146 [2024-07-15 15:05:03.118676] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:47.146 [2024-07-15 15:05:03.118680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:22:47.146 [2024-07-15 15:05:03.118685] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118691] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:47.146 [2024-07-15 15:05:03.118699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:47.146 [2024-07-15 15:05:03.118708] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181200 00:22:47.146 [2024-07-15 15:05:03.118742] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.146 [2024-07-15 15:05:03.118747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:47.146 [2024-07-15 15:05:03.118756] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:47.146 [2024-07-15 15:05:03.118760] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:47.146 [2024-07-15 15:05:03.118765] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:47.146 [2024-07-15 15:05:03.118769] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:47.146 [2024-07-15 15:05:03.118773] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:47.146 [2024-07-15 15:05:03.118778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:47.146 [2024-07-15 15:05:03.118783] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181200 00:22:47.146 [2024-07-15 15:05:03.118789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:47.146 [2024-07-15 15:05:03.118796] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.118802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.147 [2024-07-15 15:05:03.118822] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.118826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.118834] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.118840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.147 [2024-07-15 15:05:03.118846] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181200 00:22:47.147 
[2024-07-15 15:05:03.118851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.147 [2024-07-15 15:05:03.118858] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.118863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.147 [2024-07-15 15:05:03.118869] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.118875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.147 [2024-07-15 15:05:03.118879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.118884] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.118894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.118900] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.118907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.147 [2024-07-15 15:05:03.118928] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.118933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.118938] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:47.147 [2024-07-15 15:05:03.118946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.118951] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.118957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.118963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.118969] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.118975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.147 [2024-07-15 15:05:03.118991] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.118995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.119056] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119061] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119077] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181200 00:22:47.147 [2024-07-15 15:05:03.119103] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.119107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.119117] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:47.147 [2024-07-15 15:05:03.119128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119133] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119148] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181200 00:22:47.147 [2024-07-15 15:05:03.119180] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.119184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.119194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119199] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119215] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181200 00:22:47.147 [2024-07-15 15:05:03.119245] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.119250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.119257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119262] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119295] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:47.147 [2024-07-15 15:05:03.119300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:47.147 [2024-07-15 15:05:03.119305] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:47.147 [2024-07-15 15:05:03.119318] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.147 [2024-07-15 15:05:03.119332] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.147 [2024-07-15 15:05:03.119347] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.119351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.119356] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119361] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.119366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.119371] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119379] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119385] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.147 [2024-07-15 15:05:03.119399] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.119404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.119410] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119418] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.147 [2024-07-15 15:05:03.119443] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.119448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.119453] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119461] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.147 [2024-07-15 15:05:03.119484] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.147 [2024-07-15 15:05:03.119488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:22:47.147 [2024-07-15 15:05:03.119493] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119505] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x181200 00:22:47.147 [2024-07-15 15:05:03.119521] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181200 00:22:47.147 [2024-07-15 15:05:03.119527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x181200 00:22:47.148 [2024-07-15 15:05:03.119535] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181200 00:22:47.148 [2024-07-15 15:05:03.119541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x181200 00:22:47.148 [2024-07-15 15:05:03.119551] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181200 00:22:47.148 [2024-07-15 15:05:03.119557] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x181200 00:22:47.148 [2024-07-15 15:05:03.119564] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.148 [2024-07-15 15:05:03.119569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:47.148 [2024-07-15 15:05:03.119580] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181200 00:22:47.148 [2024-07-15 15:05:03.119586] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.148 [2024-07-15 15:05:03.119590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:47.148 [2024-07-15 15:05:03.119599] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181200 00:22:47.148 [2024-07-15 15:05:03.119604] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.148 [2024-07-15 15:05:03.119609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:47.148 [2024-07-15 15:05:03.119616] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181200 00:22:47.148 [2024-07-15 15:05:03.119621] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.148 [2024-07-15 15:05:03.119625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:47.148 [2024-07-15 15:05:03.119633] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181200
00:22:47.148 =====================================================
00:22:47.148 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:22:47.148 =====================================================
00:22:47.148 Controller Capabilities/Features
00:22:47.148 ================================
00:22:47.148 Vendor ID: 8086
00:22:47.148 Subsystem Vendor ID: 8086
00:22:47.148 Serial Number: SPDK00000000000001
00:22:47.148 Model Number: SPDK bdev Controller
00:22:47.148 Firmware Version: 24.09
00:22:47.148 Recommended Arb Burst: 6
00:22:47.148 IEEE OUI Identifier: e4 d2 5c
00:22:47.148 Multi-path I/O
00:22:47.148 May have multiple subsystem ports: Yes
00:22:47.148 May have multiple controllers: Yes
00:22:47.148 Associated with SR-IOV VF: No
00:22:47.148 Max Data Transfer Size: 131072
00:22:47.148 Max Number of Namespaces: 32
00:22:47.148 Max Number of I/O Queues: 127
00:22:47.148 NVMe Specification Version (VS): 1.3
00:22:47.148 NVMe Specification Version (Identify): 1.3
00:22:47.148 Maximum Queue Entries: 128
00:22:47.148 Contiguous Queues Required: Yes
00:22:47.148 Arbitration Mechanisms Supported
00:22:47.148 Weighted Round Robin: Not Supported
00:22:47.148 Vendor Specific: Not Supported
00:22:47.148 Reset Timeout: 15000 ms
00:22:47.148 Doorbell Stride: 4 bytes
00:22:47.148 NVM Subsystem Reset: Not Supported
00:22:47.148 Command Sets Supported
00:22:47.148 NVM Command Set: Supported
00:22:47.148 Boot Partition: Not Supported
00:22:47.148 Memory Page Size Minimum: 4096 bytes
00:22:47.148 Memory Page Size Maximum: 4096 bytes
00:22:47.148 Persistent Memory Region: Not Supported
00:22:47.148 Optional Asynchronous Events Supported
00:22:47.148 Namespace Attribute Notices: Supported
00:22:47.148 Firmware Activation Notices: Not Supported
00:22:47.148 ANA Change Notices: Not Supported
00:22:47.148 PLE Aggregate Log Change Notices: Not Supported
00:22:47.148 LBA Status Info Alert Notices: Not Supported
00:22:47.148 EGE Aggregate Log Change Notices: Not Supported
00:22:47.148 Normal NVM Subsystem Shutdown event: Not Supported
00:22:47.148 Zone Descriptor Change Notices: Not Supported
00:22:47.148 Discovery Log Change Notices: Not Supported
00:22:47.148 Controller Attributes
00:22:47.148 128-bit Host Identifier: Supported
00:22:47.148 Non-Operational Permissive Mode: Not Supported
00:22:47.148 NVM Sets: Not Supported
00:22:47.148 Read Recovery Levels: Not Supported
00:22:47.148 Endurance Groups: Not Supported
00:22:47.148 Predictable Latency Mode: Not Supported
00:22:47.148 Traffic Based Keep ALive: Not Supported
00:22:47.148 Namespace Granularity: Not Supported
00:22:47.148 SQ Associations: Not Supported
00:22:47.148 UUID List: Not Supported
00:22:47.148 Multi-Domain Subsystem: Not Supported
00:22:47.148 Fixed Capacity Management: Not Supported
00:22:47.148 Variable Capacity Management: Not Supported
00:22:47.148 Delete Endurance Group: Not Supported
00:22:47.148 Delete NVM Set: Not Supported
00:22:47.148 Extended LBA Formats Supported: Not Supported
00:22:47.148 Flexible Data Placement Supported: Not Supported
00:22:47.148
00:22:47.148 Controller Memory Buffer Support
00:22:47.148 ================================
00:22:47.148 Supported: No
00:22:47.148
00:22:47.148 Persistent Memory Region Support
00:22:47.148 ================================
00:22:47.148 Supported: No
00:22:47.148
00:22:47.148 Admin Command Set Attributes
00:22:47.148 ============================
00:22:47.148 Security Send/Receive: Not Supported
00:22:47.148 Format NVM: Not Supported
00:22:47.148 Firmware Activate/Download: Not Supported
00:22:47.148 Namespace Management: Not Supported
00:22:47.148 Device Self-Test: Not Supported
00:22:47.148 Directives: Not Supported
00:22:47.148 NVMe-MI: Not Supported
00:22:47.148 Virtualization Management: Not Supported
00:22:47.148 Doorbell Buffer Config: Not Supported
00:22:47.148 Get LBA Status Capability: Not Supported
00:22:47.148 Command & Feature Lockdown Capability: Not Supported
00:22:47.148 Abort Command Limit: 4
00:22:47.148 Async Event Request Limit: 4
00:22:47.148 Number of Firmware Slots: N/A
00:22:47.148 Firmware Slot 1 Read-Only: N/A
00:22:47.148 Firmware Activation Without Reset: N/A
00:22:47.148 Multiple Update Detection Support: N/A
00:22:47.148 Firmware Update Granularity: No Information Provided
00:22:47.148 Per-Namespace SMART Log: No
00:22:47.148 Asymmetric Namespace Access Log Page: Not Supported
00:22:47.148 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:22:47.148 Command Effects Log Page: Supported
00:22:47.148 Get Log Page Extended Data: Supported
00:22:47.148 Telemetry Log Pages: Not Supported
00:22:47.148 Persistent Event Log Pages: Not Supported
00:22:47.148 Supported Log Pages Log Page: May Support
00:22:47.148 Commands Supported & Effects Log Page: Not Supported
00:22:47.148 Feature Identifiers & Effects Log Page:May Support
00:22:47.148 NVMe-MI Commands & Effects Log Page: May Support
00:22:47.148 Data Area 4 for Telemetry Log: Not Supported
00:22:47.148 Error Log Page Entries Supported: 128
00:22:47.148 Keep Alive: Supported
00:22:47.148 Keep Alive Granularity: 10000 ms
00:22:47.148
00:22:47.148 NVM Command Set Attributes
00:22:47.148 ==========================
00:22:47.148 Submission Queue Entry Size
00:22:47.148 Max: 64
00:22:47.148 Min: 64
00:22:47.148 Completion Queue Entry Size
00:22:47.148 Max: 16
00:22:47.148 Min: 16
00:22:47.148 Number of Namespaces: 32
00:22:47.148 Compare Command: Supported
00:22:47.148 Write Uncorrectable Command: Not Supported
00:22:47.148 Dataset Management Command: Supported
00:22:47.148 Write Zeroes Command: Supported
00:22:47.148 Set Features Save Field: Not Supported
00:22:47.148 Reservations: Supported
00:22:47.148 Timestamp: Not Supported
00:22:47.148 Copy: Supported
00:22:47.148 Volatile Write Cache: Present
00:22:47.148 Atomic Write Unit (Normal): 1
00:22:47.148 Atomic Write Unit (PFail): 1
00:22:47.148 Atomic Compare & Write Unit: 1
00:22:47.148 Fused Compare & Write: Supported
00:22:47.149 Scatter-Gather List
00:22:47.149 SGL Command Set: Supported
00:22:47.149 SGL Keyed: Supported
00:22:47.149 SGL Bit Bucket Descriptor: Not Supported
00:22:47.149 SGL Metadata Pointer: Not Supported
00:22:47.149 Oversized SGL: Not Supported
00:22:47.149 SGL Metadata Address: Not Supported
00:22:47.149 SGL Offset: Supported
00:22:47.149 Transport SGL Data Block: Not Supported
00:22:47.149 Replay Protected Memory Block: Not Supported
00:22:47.149
00:22:47.149 Firmware Slot Information
00:22:47.149 =========================
00:22:47.149 Active slot: 1
00:22:47.149 Slot 1 Firmware Revision: 24.09
00:22:47.149
00:22:47.149
00:22:47.149 Commands Supported and Effects
00:22:47.149 ==============================
00:22:47.149 Admin Commands
00:22:47.149 --------------
00:22:47.149 Get Log Page (02h): Supported
00:22:47.149 Identify (06h): Supported
00:22:47.149 Abort (08h): Supported
00:22:47.149 Set Features (09h): Supported
00:22:47.149 Get Features (0Ah): Supported
00:22:47.149 Asynchronous Event Request (0Ch): Supported
00:22:47.149 Keep Alive (18h): Supported
00:22:47.149 I/O Commands
00:22:47.149 ------------
00:22:47.149 Flush (00h): Supported LBA-Change
00:22:47.149 Write (01h): Supported LBA-Change
00:22:47.149 Read (02h): Supported
00:22:47.149 Compare (05h): Supported
00:22:47.149 Write Zeroes (08h): Supported LBA-Change
00:22:47.149 Dataset Management (09h): Supported LBA-Change
00:22:47.149 Copy (19h): Supported LBA-Change
00:22:47.149
00:22:47.149 Error Log
00:22:47.149 =========
00:22:47.149
00:22:47.149 Arbitration
00:22:47.149 ===========
00:22:47.149 Arbitration Burst: 1
00:22:47.149
00:22:47.149 Power Management
00:22:47.149 ================
00:22:47.149 Number of Power States: 1
00:22:47.149 Current Power State: Power State #0
00:22:47.149 Power State #0:
00:22:47.149 Max Power: 0.00 W
00:22:47.149 Non-Operational State: Operational
00:22:47.149 Entry Latency: Not Reported
00:22:47.149 Exit Latency: Not Reported
00:22:47.149 Relative Read Throughput: 0
00:22:47.149 Relative Read Latency: 0
00:22:47.149 Relative Write Throughput: 0
00:22:47.149 Relative Write Latency: 0
00:22:47.149 Idle Power: Not Reported
00:22:47.149 Active Power: Not Reported
00:22:47.149 Non-Operational Permissive Mode: Not Supported
00:22:47.149
00:22:47.149 Health Information
00:22:47.149 ==================
00:22:47.149 Critical Warnings:
00:22:47.149 Available Spare Space: OK
00:22:47.149 Temperature: OK
00:22:47.149 Device Reliability: OK
00:22:47.149 Read Only: No
00:22:47.149 Volatile Memory Backup: OK
00:22:47.149 Current Temperature: 0 Kelvin (-273 Celsius)
00:22:47.149 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:22:47.149 Available Spare: 0%
00:22:47.149 Available Spare Threshold: 0%
00:22:47.149 Life Percentage [2024-07-15 15:05:03.119724]
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.119731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.119751] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.119755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.119761] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.119786] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:47.149 [2024-07-15 15:05:03.119794] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 5640 doesn't match qid 00:22:47.149 [2024-07-15 15:05:03.119808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32726 cdw0:5 sqhd:ead0 p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.119813] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 5640 doesn't match qid 00:22:47.149 [2024-07-15 15:05:03.119820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32726 cdw0:5 sqhd:ead0 p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.119825] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 5640 doesn't match qid 00:22:47.149 [2024-07-15 15:05:03.119831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32726 cdw0:5 sqhd:ead0 p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.119837] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 5640 doesn't match qid 00:22:47.149 [2024-07-15 15:05:03.119843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32726 cdw0:5 sqhd:ead0 p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.119851] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.119857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.119871] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.119876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.119883] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.119890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.119895] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.119910] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.119915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.119920] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:47.149 [2024-07-15 15:05:03.119924] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:47.149 [2024-07-15 15:05:03.119929] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.119939] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.119946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.119963] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.119968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.119973] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.119982] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.119988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.120004] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.120008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.120014] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120023] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.120045] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.120050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.120055] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120064] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.120086] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.120091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.120096] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120104] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 
00:22:47.149 [2024-07-15 15:05:03.120111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.120125] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.120130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.120135] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120144] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.120165] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.120170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.120177] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120185] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.120208] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.149 [2024-07-15 15:05:03.120212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:47.149 [2024-07-15 15:05:03.120217] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120226] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.149 [2024-07-15 15:05:03.120238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.149 [2024-07-15 15:05:03.120256] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120266] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120274] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120295] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120306] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120314] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120334] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120344] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120352] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120380] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120390] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120398] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120420] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120433] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120442] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120463] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120473] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120481] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120501] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120511] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120519] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120543] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120552] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120561] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120581] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120590] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120598] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120624] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120634] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120642] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120664] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120675] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120683] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120703] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120713] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120721] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120741] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120750] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120759] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120783] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120792] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120801] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120821] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120830] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120839] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120864] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 
15:05:03.120874] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120882] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120904] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120914] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120922] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120944] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120953] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120962] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.120968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.120983] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.120988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.120993] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.121001] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.121008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.150 [2024-07-15 15:05:03.121025] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.150 [2024-07-15 15:05:03.121030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:22:47.150 [2024-07-15 15:05:03.121035] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181200 00:22:47.150 [2024-07-15 15:05:03.121043] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121065] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121075] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121083] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121104] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121114] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121122] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121143] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121153] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121161] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121184] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121194] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121202] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121224] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121238] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121246] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121270] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121280] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121288] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121307] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121317] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121325] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121345] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121355] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121363] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121384] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121394] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121402] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121424] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 
15:05:03.121434] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121442] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121464] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121474] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121482] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121504] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121513] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121522] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121547] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121557] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121565] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121585] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121594] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121603] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121624] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121634] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121642] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121664] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121673] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121682] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121701] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121711] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121719] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121741] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121751] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121759] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.151 [2024-07-15 15:05:03.121779] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.151 [2024-07-15 15:05:03.121783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:22:47.151 [2024-07-15 15:05:03.121788] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121797] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x181200 00:22:47.151 [2024-07-15 15:05:03.121803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.121818] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.121823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.121828] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121836] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.121859] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.121864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.121869] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121877] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.121899] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.121903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.121909] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121917] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.121937] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.121941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.121946] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121955] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.121974] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.121979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 
15:05:03.121984] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121992] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.121999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.122012] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.122016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.122022] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122030] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.122052] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.122056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.122061] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122071] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.122091] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.122095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.122100] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122109] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.122132] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.122137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.122142] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122150] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.122172] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.122177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.122182] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122190] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.122197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.122214] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.122218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.122224] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.126238] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.126247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:47.152 [2024-07-15 15:05:03.126263] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:47.152 [2024-07-15 15:05:03.126267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0014 p:0 m:0 dnr:0 00:22:47.152 [2024-07-15 15:05:03.126273] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181200 00:22:47.152 [2024-07-15 15:05:03.126278] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:47.152 Used: 0% 00:22:47.152 Data Units Read: 0 00:22:47.152 Data Units Written: 0 00:22:47.152 Host Read Commands: 0 00:22:47.152 Host Write Commands: 0 00:22:47.152 Controller Busy Time: 0 minutes 00:22:47.152 Power Cycles: 0 00:22:47.152 Power On Hours: 0 hours 00:22:47.152 Unsafe Shutdowns: 0 00:22:47.152 Unrecoverable Media Errors: 0 00:22:47.152 Lifetime Error Log Entries: 0 00:22:47.152 Warning Temperature Time: 0 minutes 00:22:47.152 Critical Temperature Time: 0 minutes 00:22:47.152 00:22:47.152 Number of Queues 00:22:47.152 ================ 00:22:47.152 Number of I/O Submission Queues: 127 00:22:47.152 Number of I/O Completion Queues: 127 00:22:47.152 00:22:47.152 Active Namespaces 00:22:47.152 ================= 00:22:47.152 Namespace ID:1 00:22:47.152 Error Recovery Timeout: Unlimited 00:22:47.152 Command Set Identifier: NVM (00h) 00:22:47.152 Deallocate: Supported 00:22:47.152 Deallocated/Unwritten Error: Not Supported 00:22:47.152 Deallocated Read Value: Unknown 00:22:47.152 Deallocate in Write Zeroes: Not Supported 00:22:47.152 Deallocated Guard Field: 0xFFFF 00:22:47.152 Flush: Supported 00:22:47.152 Reservation: Supported 00:22:47.152 Namespace Sharing Capabilities: Multiple Controllers 00:22:47.152 Size (in LBAs): 131072 (0GiB) 00:22:47.152 Capacity (in LBAs): 131072 (0GiB) 00:22:47.152 Utilization (in LBAs): 131072 (0GiB) 00:22:47.152 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:47.152 EUI64: ABCDEF0123456789 00:22:47.152 UUID: 
92fa6280-afa6-44d7-bd7a-305109e5a0f1 00:22:47.152 Thin Provisioning: Not Supported 00:22:47.152 Per-NS Atomic Units: Yes 00:22:47.152 Atomic Boundary Size (Normal): 0 00:22:47.152 Atomic Boundary Size (PFail): 0 00:22:47.152 Atomic Boundary Offset: 0 00:22:47.152 Maximum Single Source Range Length: 65535 00:22:47.152 Maximum Copy Length: 65535 00:22:47.152 Maximum Source Range Count: 1 00:22:47.152 NGUID/EUI64 Never Reused: No 00:22:47.152 Namespace Write Protected: No 00:22:47.152 Number of LBA Formats: 1 00:22:47.152 Current LBA Format: LBA Format #00 00:22:47.152 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:47.152 00:22:47.152 15:05:03 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:47.152 15:05:03 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.152 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.152 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:47.413 rmmod nvme_rdma 00:22:47.413 rmmod nvme_fabrics 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1911849 ']' 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1911849 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1911849 ']' 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1911849 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1911849 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1911849' 00:22:47.413 killing process with pid 1911849 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1911849 00:22:47.413 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1911849 00:22:47.674 
15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.674 15:05:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:47.674 00:22:47.674 real 0m9.970s 00:22:47.674 user 0m9.001s 00:22:47.674 sys 0m6.261s 00:22:47.674 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:47.674 15:05:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:47.674 ************************************ 00:22:47.674 END TEST nvmf_identify 00:22:47.674 ************************************ 00:22:47.674 15:05:03 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:47.674 15:05:03 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:47.674 15:05:03 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:47.674 15:05:03 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.674 15:05:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:47.674 ************************************ 00:22:47.674 START TEST nvmf_perf 00:22:47.674 ************************************ 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:47.674 * Looking for test storage... 00:22:47.674 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- 
host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:47.674 15:05:03 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.825 
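The records above show nvmf/common.sh assembling allow-lists of RDMA-capable PCI IDs (Intel E810/X722 and several Mellanox ConnectX parts) before scanning the host for usable ports. A minimal sketch of the same enumeration step, assuming only that lspci is available; the vendor:device IDs are copied from the arrays visible in the trace and are not an exhaustive list:

  #!/usr/bin/env bash
  # Sketch: list candidate RDMA NICs by PCI vendor:device ID, mirroring the
  # e810/x722/mlx arrays built by nvmf/common.sh in the trace above.
  ids=(
    "8086:1592" "8086:159b"                      # Intel E810
    "8086:37d2"                                  # Intel X722
    "15b3:1013" "15b3:1015" "15b3:1017" "15b3:1019"
    "15b3:101d" "15b3:1021" "15b3:a2d6" "15b3:a2dc"  # Mellanox ConnectX family
  )
  for id in "${ids[@]}"; do
    # -d vendor:device restricts lspci to matching functions; -D prints the full BDF.
    lspci -D -nn -d "$id"
  done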
15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:55.825 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:55.825 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:55.825 Found net 
devices under 0000:98:00.0: mlx_0_0 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:55.825 Found net devices under 0000:98:00.1: mlx_0_1 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:55.825 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:55.826 15:05:11 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:55.826 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:55.826 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:55.826 altname enp152s0f0np0 00:22:55.826 altname ens817f0np0 00:22:55.826 inet 192.168.100.8/24 scope global mlx_0_0 00:22:55.826 valid_lft forever preferred_lft forever 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:55.826 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:55.826 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:55.826 altname enp152s0f1np1 00:22:55.826 altname ens817f1np1 00:22:55.826 inet 192.168.100.9/24 scope global mlx_0_1 00:22:55.826 valid_lft forever preferred_lft forever 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:55.826 192.168.100.9' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:55.826 192.168.100.9' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:55.826 192.168.100.9' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 
192.168.100.8 ']' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1916338 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1916338 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1916338 ']' 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:55.826 15:05:11 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:55.826 [2024-07-15 15:05:11.754425] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:55.826 [2024-07-15 15:05:11.754495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.826 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.826 [2024-07-15 15:05:11.825975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.107 [2024-07-15 15:05:11.901001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.107 [2024-07-15 15:05:11.901040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.107 [2024-07-15 15:05:11.901047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.107 [2024-07-15 15:05:11.901058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.107 [2024-07-15 15:05:11.901063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
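At this point the harness has launched the target application (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waits for its RPC server on /var/tmp/spdk.sock before issuing configuration calls. A rough equivalent of that start-and-wait step, assuming the default socket path used by rpc.py in this workspace; the polling loop is illustrative rather than a copy of the harness's own waitforlisten helper:

  # Start the SPDK NVMe-oF target on cores 0-3 and wait until its RPC server answers.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  tgt_pid=$!

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
    # spdk_get_version only succeeds once the app is up and listening on /var/tmp/spdk.sock.
    if "$rpc" spdk_get_version >/dev/null 2>&1; then
      break
    fi
    sleep 0.1
  done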
00:22:56.107 [2024-07-15 15:05:11.901200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.107 [2024-07-15 15:05:11.901340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.107 [2024-07-15 15:05:11.901655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.107 [2024-07-15 15:05:11.901656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.677 15:05:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.677 15:05:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:56.677 15:05:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.677 15:05:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:56.677 15:05:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:56.677 15:05:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.677 15:05:12 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:56.677 15:05:12 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:57.247 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:57.247 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:57.247 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:22:57.247 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:57.507 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:57.507 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:22:57.507 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:57.507 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:22:57.507 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:22:57.507 [2024-07-15 15:05:13.561559] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:22:57.766 [2024-07-15 15:05:13.589303] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6c7220/0x7f5300) succeed. 00:22:57.766 [2024-07-15 15:05:13.602385] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6c8860/0x6d5180) succeed. 
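With the mlx5 IB devices created and the RDMA transport registered, the next records configure the subsystem that the perf runs will target: a malloc bdev and the local NVMe drive are added as namespaces, then RDMA listeners are opened on the first port. Collected into one place, the RPC sequence is roughly the following; the commands and arguments are taken from the trace itself, only the shell variables are added for readability:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # RDMA transport with 1024 shared buffers and in-capsule data disabled (-c 0).
  "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0

  # 64 MiB malloc bdev with 512-byte blocks, used as one of the two namespaces.
  "$rpc" bdev_malloc_create 64 512

  # Subsystem with both namespaces and RDMA listeners on the first mlx5 port.
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
  "$rpc" nvmf_subsystem_add_ns "$nqn" Nvme0n1
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
  "$rpc" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420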
00:22:57.766 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.026 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:58.026 15:05:13 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:58.026 15:05:14 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:58.026 15:05:14 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:58.284 15:05:14 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:58.544 [2024-07-15 15:05:14.374613] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:58.544 15:05:14 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:58.544 15:05:14 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:22:58.544 15:05:14 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:22:58.544 15:05:14 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:58.544 15:05:14 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:22:59.947 Initializing NVMe Controllers 00:22:59.947 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:22:59.947 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:22:59.947 Initialization complete. Launching workers. 00:22:59.947 ======================================================== 00:22:59.947 Latency(us) 00:22:59.947 Device Information : IOPS MiB/s Average min max 00:22:59.947 PCIE (0000:65:00.0) NSID 1 from core 0: 79386.52 310.10 402.49 55.82 4377.55 00:22:59.947 ======================================================== 00:22:59.947 Total : 79386.52 310.10 402.49 55.82 4377.55 00:22:59.947 00:22:59.947 15:05:15 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:59.947 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.242 Initializing NVMe Controllers 00:23:03.242 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.242 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:03.242 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:03.242 Initialization complete. Launching workers. 
00:23:03.242 ======================================================== 00:23:03.242 Latency(us) 00:23:03.242 Device Information : IOPS MiB/s Average min max 00:23:03.242 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9732.99 38.02 102.47 37.40 4062.06 00:23:03.242 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7222.99 28.21 137.62 54.98 4100.48 00:23:03.242 ======================================================== 00:23:03.242 Total : 16955.98 66.23 117.45 37.40 4100.48 00:23:03.242 00:23:03.242 15:05:19 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:03.242 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.445 Initializing NVMe Controllers 00:23:07.445 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.445 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.445 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:07.445 Initialization complete. Launching workers. 00:23:07.445 ======================================================== 00:23:07.445 Latency(us) 00:23:07.445 Device Information : IOPS MiB/s Average min max 00:23:07.445 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20357.00 79.52 1571.81 378.13 5317.46 00:23:07.445 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.43 5058.32 10093.70 00:23:07.445 ======================================================== 00:23:07.445 Total : 24389.00 95.27 2629.79 378.13 10093.70 00:23:07.445 00:23:07.445 15:05:22 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:23:07.445 15:05:22 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:07.445 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.648 Initializing NVMe Controllers 00:23:11.648 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.648 Controller IO queue size 128, less than required. 00:23:11.648 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.648 Controller IO queue size 128, less than required. 00:23:11.648 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.648 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:11.648 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:11.648 Initialization complete. Launching workers. 
00:23:11.648 ======================================================== 00:23:11.648 Latency(us) 00:23:11.648 Device Information : IOPS MiB/s Average min max 00:23:11.648 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5037.10 1259.27 25419.36 10071.82 58340.29 00:23:11.648 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5099.09 1274.77 24971.97 10338.80 43352.50 00:23:11.648 ======================================================== 00:23:11.648 Total : 10136.19 2534.05 25194.30 10071.82 58340.29 00:23:11.648 00:23:11.648 15:05:27 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:23:11.648 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.648 No valid NVMe controllers or AIO or URING devices found 00:23:11.648 Initializing NVMe Controllers 00:23:11.648 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.648 Controller IO queue size 128, less than required. 00:23:11.648 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.648 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:11.648 Controller IO queue size 128, less than required. 00:23:11.648 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.648 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:11.648 WARNING: Some requested NVMe devices were skipped 00:23:11.648 15:05:27 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:23:11.648 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.852 Initializing NVMe Controllers 00:23:15.852 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:15.852 Controller IO queue size 128, less than required. 00:23:15.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:15.852 Controller IO queue size 128, less than required. 00:23:15.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:15.852 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:15.852 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:15.852 Initialization complete. Launching workers. 
00:23:15.852 00:23:15.852 ==================== 00:23:15.852 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:15.852 RDMA transport: 00:23:15.853 dev name: mlx5_0 00:23:15.853 polls: 269266 00:23:15.853 idle_polls: 264992 00:23:15.853 completions: 54374 00:23:15.853 queued_requests: 1 00:23:15.853 total_send_wrs: 27187 00:23:15.853 send_doorbell_updates: 3818 00:23:15.853 total_recv_wrs: 27314 00:23:15.853 recv_doorbell_updates: 3839 00:23:15.853 --------------------------------- 00:23:15.853 00:23:15.853 ==================== 00:23:15.853 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:15.853 RDMA transport: 00:23:15.853 dev name: mlx5_0 00:23:15.853 polls: 274649 00:23:15.853 idle_polls: 274392 00:23:15.853 completions: 17902 00:23:15.853 queued_requests: 1 00:23:15.853 total_send_wrs: 8951 00:23:15.853 send_doorbell_updates: 251 00:23:15.853 total_recv_wrs: 9078 00:23:15.853 recv_doorbell_updates: 253 00:23:15.853 --------------------------------- 00:23:15.853 ======================================================== 00:23:15.853 Latency(us) 00:23:15.853 Device Information : IOPS MiB/s Average min max 00:23:15.853 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6784.29 1696.07 18861.19 8366.59 47202.19 00:23:15.853 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2233.48 558.37 57612.56 29996.88 89436.93 00:23:15.853 ======================================================== 00:23:15.853 Total : 9017.77 2254.44 28458.95 8366.59 89436.93 00:23:15.853 00:23:16.113 15:05:31 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:16.113 15:05:31 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:16.113 rmmod nvme_rdma 00:23:16.113 rmmod nvme_fabrics 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1916338 ']' 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1916338 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1916338 ']' 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1916338 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:23:16.113 15:05:32 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:16.374 15:05:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1916338 00:23:16.374 15:05:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:16.374 15:05:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:16.374 15:05:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1916338' 00:23:16.374 killing process with pid 1916338 00:23:16.374 15:05:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1916338 00:23:16.374 15:05:32 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1916338 00:23:18.286 15:05:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:18.286 15:05:34 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:18.286 00:23:18.286 real 0m30.658s 00:23:18.286 user 1m32.673s 00:23:18.286 sys 0m6.913s 00:23:18.286 15:05:34 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.286 15:05:34 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:18.286 ************************************ 00:23:18.286 END TEST nvmf_perf 00:23:18.286 ************************************ 00:23:18.286 15:05:34 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:18.286 15:05:34 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:18.286 15:05:34 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:18.286 15:05:34 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.286 15:05:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:18.286 ************************************ 00:23:18.286 START TEST nvmf_fio_host 00:23:18.286 ************************************ 00:23:18.286 15:05:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:18.548 * Looking for test storage... 
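The latency tables above come from individual spdk_nvme_perf runs against the RDMA listener created earlier, varying queue depth, IO size, and run time. A representative invocation, assembled only from flags that appear verbatim in the trace (4 KiB IOs, 50/50 random read/write, one-second run, transport string pointing at the mlx5 port):

  # Queue depth 1, 4 KiB random 50/50 read/write for 1 second against the RDMA target.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'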
00:23:18.548 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.548 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.549 15:05:34 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 
00:23:26.688 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:23:26.688 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:23:26.688 Found net devices under 0000:98:00.0: mlx_0_0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:23:26.688 Found net devices under 0000:98:00.1: mlx_0_1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:26.688 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:26.688 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:23:26.688 altname enp152s0f0np0 00:23:26.688 altname ens817f0np0 00:23:26.688 inet 192.168.100.8/24 scope global mlx_0_0 00:23:26.688 valid_lft forever preferred_lft forever 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:26.688 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:26.688 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:23:26.688 altname enp152s0f1np1 00:23:26.688 altname ens817f1np1 00:23:26.688 inet 192.168.100.9/24 scope global mlx_0_1 00:23:26.688 valid_lft forever preferred_lft forever 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- 
# continue 2 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:26.688 192.168.100.9' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:26.688 192.168.100.9' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:26.688 192.168.100.9' 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:23:26.688 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1924966 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1924966 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1924966 ']' 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.689 15:05:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.689 [2024-07-15 15:05:42.001189] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:26.689 [2024-07-15 15:05:42.001281] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.689 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.689 [2024-07-15 15:05:42.073470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.689 [2024-07-15 15:05:42.147725] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.689 [2024-07-15 15:05:42.147765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.689 [2024-07-15 15:05:42.147772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.689 [2024-07-15 15:05:42.147779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.689 [2024-07-15 15:05:42.147785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.689 [2024-07-15 15:05:42.147927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.689 [2024-07-15 15:05:42.148054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.689 [2024-07-15 15:05:42.148213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.689 [2024-07-15 15:05:42.148214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.949 15:05:42 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.949 15:05:42 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:26.949 15:05:42 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:26.949 [2024-07-15 15:05:42.961750] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15ac200/0x15b06f0) succeed. 
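The records above show nvmf_tgt being launched (-i 0 -e 0xFFFF -m 0xF) and the RDMA transport being created over rpc.py; the records that follow add the malloc bdev, the subsystem, its namespace, and the RDMA listeners that the fio host test then connects to. Condensed into one place, the traced rpc.py sequence amounts to the following sketch (paths shortened relative to the spdk checkout; every flag is copied from the trace in this log, nothing extra is configured):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

192.168.100.8 is the address assigned to mlx_0_0 earlier in this run, and 4420 is NVMF_PORT from nvmf/common.sh.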
00:23:26.949 [2024-07-15 15:05:42.976371] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15ad840/0x15f1d80) succeed. 00:23:27.209 15:05:43 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:27.209 15:05:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:27.209 15:05:43 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.209 15:05:43 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:27.469 Malloc1 00:23:27.469 15:05:43 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:27.469 15:05:43 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:27.728 15:05:43 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:27.988 [2024-07-15 15:05:43.822741] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:27.988 15:05:43 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:27.988 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:28.270 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:28.270 
15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:28.270 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.270 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.270 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:28.270 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:28.270 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:28.270 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:28.270 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:28.270 15:05:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:28.534 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:28.534 fio-3.35 00:23:28.534 Starting 1 thread 00:23:28.534 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.077 00:23:31.077 test: (groupid=0, jobs=1): err= 0: pid=1925507: Mon Jul 15 15:05:46 2024 00:23:31.077 read: IOPS=20.6k, BW=80.4MiB/s (84.3MB/s)(161MiB/2003msec) 00:23:31.077 slat (nsec): min=2047, max=40503, avg=2118.86, stdev=499.61 00:23:31.077 clat (usec): min=2233, max=5715, avg=3097.66, stdev=89.35 00:23:31.077 lat (usec): min=2269, max=5717, avg=3099.78, stdev=89.38 00:23:31.077 clat percentiles (usec): 00:23:31.077 | 1.00th=[ 2835], 5.00th=[ 3064], 10.00th=[ 3064], 20.00th=[ 3064], 00:23:31.077 | 30.00th=[ 3097], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3097], 00:23:31.077 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3130], 95.00th=[ 3130], 00:23:31.077 | 99.00th=[ 3359], 99.50th=[ 3359], 99.90th=[ 4178], 99.95th=[ 4817], 00:23:31.077 | 99.99th=[ 5669] 00:23:31.077 bw ( KiB/s): min=80648, max=83216, per=99.99%, avg=82296.00, stdev=1137.41, samples=4 00:23:31.077 iops : min=20164, max=20804, avg=20574.00, stdev=283.27, samples=4 00:23:31.077 write: IOPS=20.5k, BW=80.1MiB/s (84.0MB/s)(160MiB/2003msec); 0 zone resets 00:23:31.077 slat (nsec): min=2113, max=22302, avg=2222.75, stdev=511.66 00:23:31.077 clat (usec): min=2285, max=5724, avg=3095.68, stdev=88.32 00:23:31.077 lat (usec): min=2297, max=5726, avg=3097.91, stdev=88.37 00:23:31.077 clat percentiles (usec): 00:23:31.077 | 1.00th=[ 2835], 5.00th=[ 3064], 10.00th=[ 3064], 20.00th=[ 3064], 00:23:31.077 | 30.00th=[ 3097], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3097], 00:23:31.077 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3130], 95.00th=[ 3130], 00:23:31.077 | 99.00th=[ 3326], 99.50th=[ 3359], 99.90th=[ 4113], 99.95th=[ 4883], 00:23:31.077 | 99.99th=[ 5669] 00:23:31.077 bw ( KiB/s): min=80496, max=82864, per=100.00%, avg=82052.00, stdev=1101.51, samples=4 00:23:31.077 iops : min=20124, max=20716, avg=20513.00, stdev=275.38, samples=4 00:23:31.077 lat (msec) : 4=99.87%, 10=0.13% 00:23:31.077 cpu : usr=99.60%, sys=0.00%, ctx=15, majf=0, minf=5 00:23:31.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:31.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:23:31.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:31.077 issued rwts: total=41214,41087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:31.077 00:23:31.077 Run status group 0 (all jobs): 00:23:31.077 READ: bw=80.4MiB/s (84.3MB/s), 80.4MiB/s-80.4MiB/s (84.3MB/s-84.3MB/s), io=161MiB (169MB), run=2003-2003msec 00:23:31.077 WRITE: bw=80.1MiB/s (84.0MB/s), 80.1MiB/s-80.1MiB/s (84.0MB/s-84.0MB/s), io=160MiB (168MB), run=2003-2003msec 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:31.077 15:05:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 
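Both fio runs in this test go through the same fio_plugin wrapper: it preloads SPDK's NVMe fio engine and addresses the target entirely through the --filename string (transport type, address family, RDMA address, service id, namespace), so no kernel nvme block device is involved. Stripped of the sanitizer-library probing in the trace, the invocation pattern reduces to roughly the following sketch (paths relative to the spdk checkout; the job-file contents are not printed in this log, so only the command line is reproduced here):

  export LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
  /usr/src/fio/fio app/fio/nvme/example_config.fio --bs=4096 \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'
  /usr/src/fio/fio app/fio/nvme/mock_sgl_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1'

The output that follows is from the second (mock_sgl_config.fio) job, which runs 16 KiB transfers; the 4 KiB randrw results for example_config.fio appear above.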
00:23:31.337 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:31.337 fio-3.35 00:23:31.337 Starting 1 thread 00:23:31.337 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.884 00:23:33.884 test: (groupid=0, jobs=1): err= 0: pid=1926323: Mon Jul 15 15:05:49 2024 00:23:33.884 read: IOPS=13.9k, BW=217MiB/s (227MB/s)(425MiB/1961msec) 00:23:33.884 slat (nsec): min=3390, max=55760, avg=3646.67, stdev=1188.78 00:23:33.884 clat (usec): min=338, max=10649, avg=3404.67, stdev=1890.57 00:23:33.884 lat (usec): min=341, max=10672, avg=3408.31, stdev=1890.79 00:23:33.884 clat percentiles (usec): 00:23:33.884 | 1.00th=[ 930], 5.00th=[ 1123], 10.00th=[ 1270], 20.00th=[ 1565], 00:23:33.884 | 30.00th=[ 1926], 40.00th=[ 2376], 50.00th=[ 3064], 60.00th=[ 3720], 00:23:33.884 | 70.00th=[ 4424], 80.00th=[ 5145], 90.00th=[ 6128], 95.00th=[ 6849], 00:23:33.884 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[ 9634], 99.95th=[ 9896], 00:23:33.884 | 99.99th=[10683] 00:23:33.884 bw ( KiB/s): min=102208, max=111776, per=48.14%, avg=106773.33, stdev=4798.97, samples=3 00:23:33.884 iops : min= 6388, max= 6986, avg=6673.33, stdev=299.94, samples=3 00:23:33.884 write: IOPS=7559, BW=118MiB/s (124MB/s)(219MiB/1856msec); 0 zone resets 00:23:33.884 slat (usec): min=39, max=141, avg=41.03, stdev= 6.89 00:23:33.884 clat (usec): min=520, max=23033, avg=9693.14, stdev=5337.24 00:23:33.884 lat (usec): min=560, max=23073, avg=9734.16, stdev=5337.37 00:23:33.884 clat percentiles (usec): 00:23:33.884 | 1.00th=[ 2180], 5.00th=[ 2900], 10.00th=[ 3294], 20.00th=[ 4228], 00:23:33.884 | 30.00th=[ 5407], 40.00th=[ 6718], 50.00th=[ 8160], 60.00th=[11994], 00:23:33.884 | 70.00th=[14222], 80.00th=[15401], 90.00th=[16909], 95.00th=[18220], 00:23:33.884 | 99.00th=[19792], 99.50th=[20055], 99.90th=[22152], 99.95th=[22414], 00:23:33.884 | 99.99th=[22938] 00:23:33.884 bw ( KiB/s): min=108512, max=113152, per=91.11%, avg=110208.00, stdev=2559.40, samples=3 00:23:33.884 iops : min= 6782, max= 7072, avg=6888.00, stdev=159.96, samples=3 00:23:33.884 lat (usec) : 500=0.03%, 750=0.07%, 1000=1.30% 00:23:33.884 lat (msec) : 2=19.72%, 4=27.14%, 10=37.19%, 20=14.32%, 50=0.23% 00:23:33.884 cpu : usr=97.00%, sys=0.65%, ctx=183, majf=0, minf=8 00:23:33.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:33.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:33.884 issued rwts: total=27186,14031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.884 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:33.884 00:23:33.884 Run status group 0 (all jobs): 00:23:33.884 READ: bw=217MiB/s (227MB/s), 217MiB/s-217MiB/s (227MB/s-227MB/s), io=425MiB (445MB), run=1961-1961msec 00:23:33.884 WRITE: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=219MiB (230MB), run=1856-1856msec 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:33.884 
15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:33.884 rmmod nvme_rdma 00:23:33.884 rmmod nvme_fabrics 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1924966 ']' 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1924966 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1924966 ']' 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1924966 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1924966 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1924966' 00:23:33.884 killing process with pid 1924966 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1924966 00:23:33.884 15:05:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1924966 00:23:34.146 15:05:50 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:34.146 15:05:50 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:34.146 00:23:34.146 real 0m15.715s 00:23:34.146 user 1m7.248s 00:23:34.146 sys 0m6.559s 00:23:34.146 15:05:50 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:34.146 15:05:50 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.146 ************************************ 00:23:34.146 END TEST nvmf_fio_host 00:23:34.146 ************************************ 00:23:34.146 15:05:50 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:34.146 15:05:50 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:23:34.146 15:05:50 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:34.146 15:05:50 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:34.146 15:05:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:34.146 ************************************ 00:23:34.146 START TEST nvmf_failover 00:23:34.146 ************************************ 00:23:34.146 15:05:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:23:34.408 * Looking for test storage... 00:23:34.408 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:34.408 15:05:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.566 
15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:23:42.566 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:23:42.566 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:23:42.566 Found net devices under 0000:98:00.0: mlx_0_0 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:23:42.566 Found net devices under 0000:98:00.1: mlx_0_1 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:42.566 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:42.567 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:42.567 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:23:42.567 altname enp152s0f0np0 00:23:42.567 altname ens817f0np0 00:23:42.567 inet 192.168.100.8/24 scope global mlx_0_0 00:23:42.567 valid_lft forever preferred_lft forever 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:42.567 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:42.567 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:23:42.567 altname enp152s0f1np1 00:23:42.567 altname ens817f1np1 00:23:42.567 inet 192.168.100.9/24 scope global mlx_0_1 00:23:42.567 valid_lft forever preferred_lft forever 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:42.567 15:05:58 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:42.567 192.168.100.9' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:42.567 192.168.100.9' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:42.567 192.168.100.9' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1930998 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1930998 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1930998 ']' 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.567 15:05:58 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.567 [2024-07-15 15:05:58.400046] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:42.567 [2024-07-15 15:05:58.400117] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.567 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.567 [2024-07-15 15:05:58.489868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:42.567 [2024-07-15 15:05:58.583172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.567 [2024-07-15 15:05:58.583242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.567 [2024-07-15 15:05:58.583252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.567 [2024-07-15 15:05:58.583260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.567 [2024-07-15 15:05:58.583266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
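The trace above is nvmf/common.sh discovering the two mlx5 ports, reading an IPv4 address off each mlx_* net device, and fixing the RDMA transport options before nvmf_tgt comes up. A condensed bash sketch of that address-discovery step follows; get_ip_address, the ip/awk/cut pipeline, and the exported variable names come straight from the trace, while the rdma_ifs array is an illustrative stand-in for the real get_rdma_if_list helper, not the script itself.

# Condensed sketch (not the real nvmf/common.sh) of how the trace derives the two
# RDMA target IPs: take the mlx_* net devices found under the ConnectX PCI functions,
# read the first IPv4 address off each, and export them for the test scripts.
get_ip_address() {
    local interface=$1
    # same pipeline as the trace: field 4 of "ip -o -4 addr show" is addr/prefix
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ifs=(mlx_0_0 mlx_0_1)   # stand-in for get_rdma_if_list; names from "Found net devices under 0000:98:00.x"
NVMF_FIRST_TARGET_IP=$(get_ip_address "${rdma_ifs[0]}")    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address "${rdma_ifs[1]}")   # 192.168.100.9 in this run
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'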
00:23:42.567 [2024-07-15 15:05:58.583394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.567 [2024-07-15 15:05:58.583689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.567 [2024-07-15 15:05:58.583690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.136 15:05:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.136 15:05:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:43.136 15:05:59 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.136 15:05:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.136 15:05:59 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:43.396 15:05:59 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.396 15:05:59 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:43.396 [2024-07-15 15:05:59.405098] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb7c920/0xb80e10) succeed. 00:23:43.396 [2024-07-15 15:05:59.418679] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb7dec0/0xbc24a0) succeed. 00:23:43.655 15:05:59 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:43.655 Malloc0 00:23:43.915 15:05:59 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.915 15:05:59 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:44.175 15:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:44.175 [2024-07-15 15:06:00.199988] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:44.175 15:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:44.435 [2024-07-15 15:06:00.368154] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:44.435 15:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:44.710 [2024-07-15 15:06:00.528687] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1931400 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm 
-f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1931400 /var/tmp/bdevperf.sock 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1931400 ']' 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.710 15:06:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:45.378 15:06:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.378 15:06:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:45.378 15:06:01 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:45.639 NVMe0n1 00:23:45.639 15:06:01 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:45.899 00:23:45.899 15:06:01 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1931810 00:23:45.899 15:06:01 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:45.899 15:06:01 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:46.840 15:06:02 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:47.101 15:06:03 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:50.394 15:06:06 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:50.394 00:23:50.394 15:06:06 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:50.654 15:06:06 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:53.952 15:06:09 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:53.952 [2024-07-15 15:06:09.614184] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:53.952 15:06:09 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:54.895 15:06:10 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:54.895 15:06:10 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 1931810 00:24:01.480 0 00:24:01.480 15:06:16 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 1931400 00:24:01.480 15:06:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1931400 ']' 00:24:01.480 15:06:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1931400 00:24:01.480 15:06:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:01.480 15:06:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.480 15:06:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1931400 00:24:01.480 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:01.480 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:01.480 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1931400' 00:24:01.480 killing process with pid 1931400 00:24:01.480 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1931400 00:24:01.480 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1931400 00:24:01.480 15:06:17 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.480 [2024-07-15 15:06:00.587976] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:01.480 [2024-07-15 15:06:00.588033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931400 ] 00:24:01.480 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.480 [2024-07-15 15:06:00.655564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.480 [2024-07-15 15:06:00.720030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.480 Running I/O for 15 seconds... 
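Everything bdevperf logs from this point is the verify workload being aborted and resubmitted while listeners are pulled out from under it. For reference, below is a condensed replay of the RPC sequence host/failover.sh drives in the trace above; the rpc/rpc_bp wrapper functions and the shortened paths are shorthand for the full /var/jenkins/workspace/... commands shown in the log, not part of the test itself, and the later listener shuffling (add 4420 back, drop 4421/4422) is omitted.

# Sketch of the failover test flow, assuming an spdk checkout as the working directory.
rpc()    { scripts/rpc.py "$@"; }                              # target RPC socket
rpc_bp() { scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }    # bdevperf RPC socket
nqn=nqn.2016-06.io.spdk:cnode1
ip=192.168.100.8

rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns $nqn Malloc0
for port in 4420 4421 4422; do
    rpc nvmf_subsystem_add_listener $nqn -t rdma -a $ip -s $port
done

# bdevperf waits for RPC (-z), connects through two of the three listeners,
# then the test removes the active one so I/O has to fail over to 4421.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
rpc_bp bdev_nvme_attach_controller -b NVMe0 -t rdma -a $ip -s 4420 -f ipv4 -n $nqn
rpc_bp bdev_nvme_attach_controller -b NVMe0 -t rdma -a $ip -s 4421 -f ipv4 -n $nqn
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
rpc nvmf_subsystem_remove_listener $nqn -t rdma -a $ip -s 4420

With listener 4420 gone, the NVMe0 controller reconnects through 4421; the burst of ABORTED - SQ DELETION completions that follows is the in-flight I/O on the removed path being failed back to bdevperf.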
00:24:01.480 [2024-07-15 15:06:04.006205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183f00 00:24:01.480 [2024-07-15 15:06:04.006412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.480 [2024-07-15 15:06:04.006421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:11944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 
key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.006984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.006991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.007000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.007007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.007016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.007023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.007032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.007039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.007048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183f00 00:24:01.481 [2024-07-15 15:06:04.007055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.481 [2024-07-15 15:06:04.007064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 
[2024-07-15 15:06:04.007162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183f00 00:24:01.482 [2024-07-15 15:06:04.007256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12392 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.482 [2024-07-15 15:06:04.007726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.482 [2024-07-15 15:06:04.007733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007961] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.007984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.007994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:04.008268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.483 [2024-07-15 15:06:04.008274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 
15:06:04.008283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:01.483 [2024-07-15 15:06:04.008290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0
00:24:01.483 [2024-07-15 15:06:04.008299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:01.483 [2024-07-15 15:06:04.008306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0
00:24:01.483 [2024-07-15 15:06:04.008315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:01.483 [2024-07-15 15:06:04.008321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0
00:24:01.483 [2024-07-15 15:06:04.010590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:01.483 [2024-07-15 15:06:04.010603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:01.483 [2024-07-15 15:06:04.010609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12816 len:8 PRP1 0x0 PRP2 0x0
00:24:01.483 [2024-07-15 15:06:04.010620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:01.483 [2024-07-15 15:06:04.010653] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:24:01.483 [2024-07-15 15:06:04.010664] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:24:01.483 [2024-07-15 15:06:04.010678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:01.483 [2024-07-15 15:06:04.014266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:01.483 [2024-07-15 15:06:04.034209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:01.483 [2024-07-15 15:06:04.095498] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
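The block above is one complete failover cycle from the bdevperf run: queued WRITE commands are drained with "ABORTED - SQ DELETION" completions, the qpair is disconnected and freed, bdev_nvme fails over from 192.168.100.8:4420 to 192.168.100.8:4421, and the controller reset completes successfully. A minimal, illustrative sketch follows; it is not part of the SPDK test suite, and the helper names and regexes are assumptions based only on message text visible in this log. It shows one way such console output could be summarized per reset cycle instead of reading the full command dump:

```python
#!/usr/bin/env python3
# Illustrative sketch (assumed helper, not from the SPDK repo): summarize the
# bdev_nvme failover/reset events in a console log like the one above.
import re
import sys

# Patterns taken verbatim from message text that appears in this log.
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"Resetting controller successful")
ABORTED_RE = re.compile(r"ABORTED - SQ DELETION")

def summarize(log_text: str) -> None:
    aborted = 0
    for line in log_text.splitlines():
        # Works for both one-entry-per-line and densely wrapped log text.
        aborted += len(ABORTED_RE.findall(line))
        m = FAILOVER_RE.search(line)
        if m:
            print(f"failover: {m.group(1)} -> {m.group(2)} "
                  f"({aborted} aborted completions so far)")
        if RESET_OK_RE.search(line):
            print("controller reset completed successfully")

if __name__ == "__main__":
    summarize(sys.stdin.read())
```

Fed this console output on stdin, the sketch would print one line per failover event and per successful reset, with a running count of the aborted completions dumped in between.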
00:24:01.483 [2024-07-15 15:06:07.443412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183f00 00:24:01.483 [2024-07-15 15:06:07.443452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.483 [2024-07-15 15:06:07.443469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 
15:06:07.443616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 
sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183f00 00:24:01.484 [2024-07-15 15:06:07.443781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.443984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.484 [2024-07-15 15:06:07.443991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.484 [2024-07-15 15:06:07.444000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444087] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.485 [2024-07-15 15:06:07.444165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:72112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 
key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183f00 00:24:01.485 [2024-07-15 15:06:07.444640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.485 [2024-07-15 15:06:07.444649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 
15:06:07.444696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.444735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.444750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.444768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.444784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.444800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.444816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.444832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72368 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.444848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.444985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.444992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:86 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.486 [2024-07-15 15:06:07.445231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.445247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.445263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.445279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183f00 00:24:01.486 [2024-07-15 15:06:07.445295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.486 [2024-07-15 15:06:07.445304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183f00 00:24:01.487 [2024-07-15 15:06:07.445311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183f00 00:24:01.487 [2024-07-15 15:06:07.445327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183f00 00:24:01.487 [2024-07-15 15:06:07.445344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183f00 00:24:01.487 [2024-07-15 15:06:07.445360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:07.445376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:07.445391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:07.445407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183f00 00:24:01.487 [2024-07-15 15:06:07.445423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183f00 00:24:01.487 [2024-07-15 15:06:07.445440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183f00 00:24:01.487 [2024-07-15 15:06:07.445456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 
15:06:07.445464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:07.445471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:07.445487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.445496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:07.445503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.447827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.487 [2024-07-15 15:06:07.447839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.487 [2024-07-15 15:06:07.447846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73000 len:8 PRP1 0x0 PRP2 0x0 00:24:01.487 [2024-07-15 15:06:07.447856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:07.447890] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:01.487 [2024-07-15 15:06:07.447899] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:24:01.487 [2024-07-15 15:06:07.447907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.487 [2024-07-15 15:06:07.451474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.487 [2024-07-15 15:06:07.471284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.487 [2024-07-15 15:06:07.536210] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:01.487 [2024-07-15 15:06:11.782142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183f00 00:24:01.487 [2024-07-15 15:06:11.782177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183f00 00:24:01.487 [2024-07-15 15:06:11.782205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782343] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.487 [2024-07-15 15:06:11.782579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.487 [2024-07-15 15:06:11.782586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 
15:06:11.782666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 
15:06:11.782819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.488 [2024-07-15 15:06:11.782954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.782988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.782997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183f00 00:24:01.488 [2024-07-15 15:06:11.783150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.488 [2024-07-15 15:06:11.783159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783273] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127360 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 
sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.489 [2024-07-15 15:06:11.783734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.489 [2024-07-15 15:06:11.783775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183f00 00:24:01.489 [2024-07-15 15:06:11.783782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183f00 00:24:01.490 [2024-07-15 15:06:11.783798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183f00 00:24:01.490 [2024-07-15 15:06:11.783814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183f00 00:24:01.490 [2024-07-15 15:06:11.783831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183f00 00:24:01.490 [2024-07-15 15:06:11.783847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183f00 00:24:01.490 [2024-07-15 15:06:11.783863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.783880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.783896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.783912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.783928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.783944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.783960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.783975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.783984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.783991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183f00 00:24:01.490 [2024-07-15 15:06:11.784150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.490 [2024-07-15 15:06:11.784232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb0fa000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.784337] rdma_provider_verbs.c: 86:spdk_rdma_provider_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:24:01.490 [2024-07-15 15:06:11.786622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.490 [2024-07-15 15:06:11.786632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.490 [2024-07-15 15:06:11.786639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127640 len:8 PRP1 0x0 PRP2 0x0 00:24:01.490 [2024-07-15 15:06:11.786649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.490 [2024-07-15 15:06:11.786679] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:01.490 [2024-07-15 15:06:11.786688] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:24:01.490 [2024-07-15 15:06:11.786696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.490 [2024-07-15 15:06:11.790242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.490 [2024-07-15 15:06:11.811715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.490 [2024-07-15 15:06:11.867514] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:01.490 
00:24:01.490 Latency(us) 
00:24:01.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:01.490 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:24:01.490 Verification LBA range: start 0x0 length 0x4000 
00:24:01.490 NVMe0n1 : 15.01 13187.20 51.51 312.21 0.00 9454.17 339.63 1020613.97 
00:24:01.490 =================================================================================================================== 
00:24:01.490 Total : 13187.20 51.51 312.21 0.00 9454.17 339.63 1020613.97 
00:24:01.490 Received shutdown signal, test time was about 15.000000 seconds 
00:24:01.490 
00:24:01.490 Latency(us) 
00:24:01.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:01.490 =================================================================================================================== 
00:24:01.490 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1935268 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1935268 /var/tmp/bdevperf.sock 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1935268 ']' 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 
00:24:01.490 15:06:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:24:02.062 15:06:18 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 
00:24:02.062 15:06:18 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 
00:24:02.062 15:06:18 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 
00:24:02.335 [2024-07-15 15:06:18.174345] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 
00:24:02.335 15:06:18 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 
00:24:02.335 [2024-07-15 15:06:18.334826] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 
00:24:02.335 15:06:18 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:24:02.595 NVMe0n1 
00:24:02.595 15:06:18 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:24:02.856 
00:24:02.856 15:06:18 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:24:03.117 
00:24:03.117 15:06:19 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 
00:24:03.117 15:06:19 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:24:03.377 15:06:19 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:24:03.377 15:06:19 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 
00:24:06.674 15:06:22 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:24:06.674 15:06:22 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 
00:24:06.675 15:06:22 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:24:06.675 15:06:22 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1936288 
00:24:06.675 15:06:22 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 1936288 
00:24:07.626 0 
00:24:07.626 15:06:23 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 
00:24:07.626 [2024-07-15 15:06:17.256021] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:24:07.626 [2024-07-15 15:06:17.256081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1935268 ] 00:24:07.626 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.626 [2024-07-15 15:06:17.322433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.626 [2024-07-15 15:06:17.385443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.626 [2024-07-15 15:06:19.380504] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:24:07.626 [2024-07-15 15:06:19.381288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.626 [2024-07-15 15:06:19.381335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.626 [2024-07-15 15:06:19.408416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.626 [2024-07-15 15:06:19.432549] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:07.626 Running I/O for 1 seconds... 00:24:07.626 00:24:07.626 Latency(us) 00:24:07.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.626 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:07.626 Verification LBA range: start 0x0 length 0x4000 00:24:07.626 NVMe0n1 : 1.00 16669.01 65.11 0.00 0.00 7629.31 1884.16 21954.56 00:24:07.626 =================================================================================================================== 00:24:07.626 Total : 16669.01 65.11 0.00 0.00 7629.31 1884.16 21954.56 00:24:07.626 15:06:23 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.626 15:06:23 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:07.886 15:06:23 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:08.147 15:06:24 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:08.147 15:06:24 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:08.147 15:06:24 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:08.408 15:06:24 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 1935268 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1935268 ']' 00:24:11.708 
15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1935268 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1935268 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1935268' 00:24:11.708 killing process with pid 1935268 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1935268 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1935268 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:11.708 15:06:27 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:11.969 rmmod nvme_rdma 00:24:11.969 rmmod nvme_fabrics 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1930998 ']' 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1930998 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1930998 ']' 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1930998 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.969 15:06:27 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1930998 00:24:11.969 15:06:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.969 15:06:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.969 15:06:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 1930998' 00:24:11.969 killing process with pid 1930998 00:24:11.969 15:06:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1930998 00:24:11.969 15:06:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1930998 00:24:12.230 15:06:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.230 15:06:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:12.230 00:24:12.230 real 0m38.066s 00:24:12.230 user 2m1.940s 00:24:12.230 sys 0m7.744s 00:24:12.230 15:06:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.230 15:06:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.230 ************************************ 00:24:12.230 END TEST nvmf_failover 00:24:12.230 ************************************ 00:24:12.230 15:06:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:12.230 15:06:28 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:12.230 15:06:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.230 15:06:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.230 15:06:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:12.230 ************************************ 00:24:12.230 START TEST nvmf_host_discovery 00:24:12.230 ************************************ 00:24:12.230 15:06:28 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:12.492 * Looking for test storage... 00:24:12.492 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.492 15:06:28 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.492 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:12.493 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:24:12.493 00:24:12.493 real 0m0.132s 00:24:12.493 user 0m0.057s 00:24:12.493 sys 0m0.083s 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.493 ************************************ 00:24:12.493 END TEST nvmf_host_discovery 00:24:12.493 ************************************ 00:24:12.493 15:06:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:12.493 15:06:28 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:12.493 15:06:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.493 15:06:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.493 15:06:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:12.493 ************************************ 00:24:12.493 START TEST nvmf_host_multipath_status 00:24:12.493 ************************************ 00:24:12.493 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:12.754 * Looking for test storage... 
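Both host tests begin by sourcing nvmf/common.sh, which (as traced above) generates a host NQN with nvme gen-hostnqn, records an NVME_HOSTID matching its UUID, and prepares the NVME_CONNECT / NVME_HOST helpers for kernel-initiator tests. This run never invokes them (discovery is skipped on RDMA), but a rough, illustrative sketch of how they would compose into a connect call against the portal used in this job:

#!/usr/bin/env bash
# Illustrative only: how the NVME_CONNECT / NVME_HOST helpers traced above would
# typically be used. The NQN/ID are regenerated here, not the ones from this run.
set -euo pipefail

NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # UUID suffix, matching the traced values
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVME_CONNECT="nvme connect -i 15"           # the mlx5 path adds -i 15 later in the trace

# Connect a kernel initiator to the subsystem this job exports on 192.168.100.8.
$NVME_CONNECT -t rdma -a 192.168.100.8 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"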
00:24:12.754 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.754 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.755 15:06:28 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.755 15:06:28 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.898 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:20.899 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:20.899 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:20.899 
15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:20.899 Found net devices under 0000:98:00.0: mlx_0_0 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:20.899 Found net devices under 0000:98:00.1: mlx_0_1 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:20.899 15:06:36 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:20.899 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.899 link/ether ec:0d:9a:8b:2e:0c brd 
ff:ff:ff:ff:ff:ff 00:24:20.899 altname enp152s0f0np0 00:24:20.899 altname ens817f0np0 00:24:20.899 inet 192.168.100.8/24 scope global mlx_0_0 00:24:20.899 valid_lft forever preferred_lft forever 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:20.899 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.899 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:24:20.899 altname enp152s0f1np1 00:24:20.899 altname ens817f1np1 00:24:20.899 inet 192.168.100.9/24 scope global mlx_0_1 00:24:20.899 valid_lft forever preferred_lft forever 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:20.899 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.900 15:06:36 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:20.900 192.168.100.9' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:20.900 192.168.100.9' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:20.900 192.168.100.9' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1941686 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1941686 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1941686 ']' 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.900 15:06:36 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:20.900 [2024-07-15 15:06:36.778591] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:20.900 [2024-07-15 15:06:36.778656] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.900 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.900 [2024-07-15 15:06:36.848699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:20.900 [2024-07-15 15:06:36.921719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.900 [2024-07-15 15:06:36.921758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.900 [2024-07-15 15:06:36.921766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.900 [2024-07-15 15:06:36.921777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.900 [2024-07-15 15:06:36.921783] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
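At this point nvmfappstart has launched a fresh nvmf_tgt (pid 1941686) on cores 0-1 for the multipath test; the trace below then builds the RDMA transport, a Malloc namespace, and the cnode1 subsystem with two listeners. A condensed target-side sketch of that same RPC sequence, assuming the default /var/tmp/spdk.sock RPC socket:

#!/usr/bin/env bash
# Target bring-up mirroring the RPC calls traced in this test; commands and
# values are taken from the trace, only the polling loop is a simplification.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"          # defaults to /var/tmp/spdk.sock

# Start the target on two cores with all tracepoint groups enabled, as the test does.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# RDMA transport, 64 MiB malloc bdev, and the test subsystem with the two portals
# that the ANA-state flips below toggle between.
"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421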
00:24:20.900 [2024-07-15 15:06:36.921918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.900 [2024-07-15 15:06:36.921920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.840 15:06:37 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.840 15:06:37 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:21.840 15:06:37 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.840 15:06:37 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.840 15:06:37 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:21.840 15:06:37 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.840 15:06:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1941686 00:24:21.840 15:06:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:21.840 [2024-07-15 15:06:37.757862] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x86eb70/0x873060) succeed. 00:24:21.840 [2024-07-15 15:06:37.770998] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x870070/0x8b46f0) succeed. 00:24:21.840 15:06:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:22.100 Malloc0 00:24:22.100 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:22.361 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:22.361 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:22.620 [2024-07-15 15:06:38.515032] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:22.620 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:22.881 [2024-07-15 15:06:38.683180] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:22.881 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1942037 00:24:22.881 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:22.881 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:22.881 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 1942037 /var/tmp/bdevperf.sock 00:24:22.881 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1942037 ']' 00:24:22.881 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.881 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.882 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.882 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.882 15:06:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:23.450 15:06:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.450 15:06:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:23.450 15:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:23.710 15:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:23.970 Nvme0n1 00:24:23.970 15:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:24.230 Nvme0n1 00:24:24.230 15:06:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:24.230 15:06:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:26.241 15:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:26.241 15:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:24:26.503 15:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:26.503 15:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.886 15:06:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:28.147 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.147 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:28.147 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.147 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:28.409 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.409 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:28.409 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.409 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.409 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.409 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:28.409 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.409 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:28.669 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.669 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
set_ANA_state non_optimized optimized 00:24:28.669 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:28.669 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:28.929 15:06:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:29.870 15:06:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:29.870 15:06:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:29.870 15:06:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.870 15:06:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:30.131 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.131 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:30.131 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.131 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:30.391 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.391 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:30.391 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.391 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:30.391 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.391 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:30.391 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.391 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:30.652 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.652 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:30.652 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.652 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.652 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.652 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:30.652 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.652 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.912 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.912 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:30.912 15:06:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:31.171 15:06:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:24:31.171 15:06:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.551 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.551 
15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.811 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.811 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.811 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.811 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:33.072 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.072 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:33.072 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.072 15:06:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.072 15:06:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.072 15:06:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:33.072 15:06:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.072 15:06:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.333 15:06:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.333 15:06:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:33.333 15:06:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:33.593 15:06:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:24:33.593 15:06:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.975 15:06:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:35.234 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.234 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:35.234 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.234 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.234 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.234 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.234 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.234 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.493 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.493 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:35.493 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.493 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.752 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:35.752 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:35.752 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:24:35.752 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:24:36.012 15:06:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:36.952 15:06:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:36.952 15:06:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:36.952 15:06:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.952 15:06:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.213 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.213 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:37.213 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.213 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.213 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.213 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.213 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.213 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:37.474 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.474 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:37.474 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.474 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:37.734 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.734 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:37.734 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.734 
15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:37.734 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.734 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:37.734 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.734 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.995 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.995 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:37.995 15:06:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:24:38.256 15:06:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:38.257 15:06:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.643 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:24:39.904 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.904 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:39.904 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.904 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:39.904 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.905 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:40.167 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.167 15:06:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:40.167 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.167 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:40.167 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.167 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:40.429 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.429 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:40.429 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:40.429 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:24:40.690 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:40.951 15:06:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:41.894 15:06:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:41.894 15:06:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.894 15:06:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.894 15:06:57 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:42.154 15:06:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.154 15:06:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:42.154 15:06:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.154 15:06:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:42.155 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.155 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:42.155 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.155 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:42.416 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.416 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:42.416 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.416 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.416 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.416 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:42.416 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.416 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.676 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.676 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:42.677 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.677 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.937 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.937 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:42.937 
15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:42.937 15:06:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:43.210 15:06:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:44.152 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:44.152 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:44.152 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.152 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.414 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.414 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:44.414 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.414 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.676 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.676 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.676 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.676 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.676 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.676 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.676 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.676 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.934 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.934 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:44.934 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.934 15:07:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.194 15:07:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.194 15:07:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:45.194 15:07:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.194 15:07:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.194 15:07:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.194 15:07:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:45.194 15:07:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:45.455 15:07:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:24:45.716 15:07:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:46.657 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:46.657 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:46.657 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.657 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:46.657 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.657 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:46.657 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.657 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.916 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.916 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:46.916 15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.916 
15:07:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.177 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.177 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.177 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.177 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:47.177 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.177 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:47.177 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.177 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.436 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.436 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.436 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.436 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.695 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.695 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:47.695 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:47.695 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:24:47.955 15:07:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:48.894 15:07:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:48.894 15:07:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.894 15:07:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.894 15:07:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:24:49.155 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.155 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:49.155 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.155 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.416 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.416 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.416 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.416 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.416 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.416 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.416 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.416 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.675 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.675 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.675 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.675 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.675 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.675 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:49.675 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.675 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1942037 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1942037 ']' 00:24:49.936 
15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1942037 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1942037 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1942037' 00:24:49.936 killing process with pid 1942037 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1942037 00:24:49.936 15:07:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1942037 00:24:50.200 Connection closed with partial response: 00:24:50.200 00:24:50.200 00:24:50.200 15:07:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1942037 00:24:50.200 15:07:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:50.200 [2024-07-15 15:06:38.758151] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:50.200 [2024-07-15 15:06:38.758204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942037 ] 00:24:50.200 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.200 [2024-07-15 15:06:38.815024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.200 [2024-07-15 15:06:38.867717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.200 Running I/O for 90 seconds... 
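(The repeated xtrace lines above are all invocations of three small helpers in host/multipath_status.sh. The script source is not included in this log, so the exact argument handling below is an assumption; a minimal sketch reconstructed from the trace, in which every RPC name, jq filter, NQN, address and port is taken verbatim from the commands shown above:)

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  port_status() {
      # $1 = listener trsvcid (4420/4421), $2 = io_path field (current|connected|accessible), $3 = expected value
      local port=$1 field=$2 expected=$3 actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }

  check_status() {
      # six booleans: current/connected/accessible for port 4420 then 4421
      port_status 4420 current "$1";    port_status 4421 current "$2"
      port_status 4420 connected "$3";  port_status 4421 connected "$4"
      port_status 4420 accessible "$5"; port_status 4421 accessible "$6"
  }

  set_ANA_state() {
      # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n "$2"
  }

(Each test step in the trace is then just set_ANA_state <state> <state>, a short sleep, and a check_status call asserting the expected per-port flags reported by bdev_nvme_get_io_paths.)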
00:24:50.200 [2024-07-15 15:06:51.747843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.747878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.747911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x184100 00:24:50.200 [2024-07-15 15:06:51.747918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.747926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x184100 00:24:50.200 [2024-07-15 15:06:51.747932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.747940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184100 00:24:50.200 [2024-07-15 15:06:51.747945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.747952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.747957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.747965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.747969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.747977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.747982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.747989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.747994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.200 [2024-07-15 15:06:51.748553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.200 [2024-07-15 15:06:51.748558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:24:50.201 [2024-07-15 15:06:51.748682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.748906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.748911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.749118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.749136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184100 00:24:50.201 [2024-07-15 15:06:51.749603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.749619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.749976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.749991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.749996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750122] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.201 [2024-07-15 15:06:51.750195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.201 [2024-07-15 15:06:51.750209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 
15:06:51.750309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45336 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:06:51.750508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:06:51.750521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:06:51.750526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.833747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.833782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77008 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000075e4000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184100 00:24:50.202 [2024-07-15 15:07:03.834839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.202 [2024-07-15 15:07:03.834863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.202 [2024-07-15 15:07:03.834870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.834876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.834888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x184100 00:24:50.203 [2024-07-15 15:07:03.834900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.834912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x184100 00:24:50.203 [2024-07-15 15:07:03.834925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.834937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.834949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.834963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.834975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.834988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.834995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.834999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x184100 00:24:50.203 [2024-07-15 15:07:03.835012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:77136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x184100 00:24:50.203 [2024-07-15 15:07:03.835025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.835037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.835049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.835061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.835073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.835086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x184100 00:24:50.203 [2024-07-15 15:07:03.835098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184100 00:24:50.203 [2024-07-15 15:07:03.835111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.203 [2024-07-15 15:07:03.835123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.203 [2024-07-15 15:07:03.835131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x184100 00:24:50.203 [2024-07-15 15:07:03.835136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:50.203 Received shutdown signal, test time was about 25.621122 seconds
00:24:50.203
00:24:50.203                                                    Latency(us)
00:24:50.203 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:24:50.203 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:50.203 	 Verification LBA range: start 0x0 length 0x4000
00:24:50.203 	 Nvme0n1                  :      25.62   15658.70      61.17       0.00      0.00    8153.89      60.16 3019898.88
00:24:50.203 ===================================================================================================================
00:24:50.203 Total                       :             15658.70      61.17       0.00      0.00    8153.89      60.16 3019898.88
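The MiB/s column in the summary above follows directly from the IOPS figure and the 4096-byte I/O size of the verify job; a quick cross-check with plain awk (not part of the test, shown here only to make the unit conversion explicit):

    # 15658.70 IOPS x 4096 B per I/O, converted to MiB/s (1 MiB = 1048576 B)
    awk 'BEGIN { printf "%.2f MiB/s\n", 15658.70 * 4096 / 1048576 }'
    # -> 61.17 MiB/s, matching the Nvme0n1 and Total rows

The roughly 3.0 s maximum latency (3019898.88 us) is consistent with I/O briefly stalling while the path reported ASYMMETRIC ACCESS INACCESSIBLE in the notices above.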
with pid 1941686 00:24:50.463 15:07:06 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1941686 00:24:50.463 15:07:06 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1941686 00:24:50.724 15:07:06 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.724 15:07:06 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:50.724 00:24:50.724 real 0m38.060s 00:24:50.724 user 1m43.203s 00:24:50.724 sys 0m9.400s 00:24:50.724 15:07:06 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.724 15:07:06 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:50.724 ************************************ 00:24:50.724 END TEST nvmf_host_multipath_status 00:24:50.724 ************************************ 00:24:50.724 15:07:06 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:50.724 15:07:06 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:50.724 15:07:06 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.724 15:07:06 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.724 15:07:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:50.724 ************************************ 00:24:50.724 START TEST nvmf_discovery_remove_ifc 00:24:50.724 ************************************ 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:50.724 * Looking for test storage... 
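A condensed bash sketch of the multipath_status teardown traced just above (multipath_status.sh@143-148 plus the nvmftestfini steps from nvmf/common.sh). It restates the trace for readability rather than quoting the scripts verbatim; the workspace path and target PID (1941686) are specific to this run, and the module-unload retry loop in nvmf/common.sh is collapsed to a single attempt.

```sh
# Delete the test subsystem over JSON-RPC, clean up, and tear down the RDMA stack.
rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$rootdir/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

trap - SIGINT SIGTERM EXIT                  # clear the test's cleanup trap
rm -f "$rootdir/test/nvmf/host/try.txt"     # scratch file used by the test

# nvmftestfini: unload the initiator-side fabrics modules, then stop the target app.
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics
kill 1941686 && wait 1941686                # PID of the nvmf target process in this run
```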
00:24:50.724 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:50.724 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
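The discovery_remove_ifc test exits immediately on RDMA, as the trace of discovery_remove_ifc.sh@14-16 shows. A minimal sketch of that guard follows; the variable name TEST_TRANSPORT is an assumption, since the trace only shows the already-expanded comparison `'[' rdma == rdma ']'`.

```sh
# discovery_remove_ifc.sh@14-16 as traced: skip the whole test when the transport is RDMA.
# TEST_TRANSPORT is a guessed variable name; only its expanded value appears in the log.
if [ "$TEST_TRANSPORT" == "rdma" ]; then
    echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
    exit 0
fi
```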
00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:24:50.724 00:24:50.724 real 0m0.124s 00:24:50.724 user 0m0.065s 00:24:50.724 sys 0m0.067s 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.724 15:07:06 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.724 ************************************ 00:24:50.724 END TEST nvmf_discovery_remove_ifc 00:24:50.724 ************************************ 00:24:50.985 15:07:06 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:50.985 15:07:06 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:24:50.985 15:07:06 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.985 15:07:06 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.985 15:07:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:50.985 ************************************ 00:24:50.985 START TEST nvmf_identify_kernel_target 00:24:50.985 ************************************ 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:24:50.985 * Looking for test storage... 00:24:50.985 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.985 15:07:06 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.985 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.986 15:07:06 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.986 15:07:06 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 
00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:59.218 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:59.219 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:59.219 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:59.219 Found net devices under 0000:98:00.0: mlx_0_0 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:59.219 Found net devices under 0000:98:00.1: mlx_0_1 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:59.219 
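Before the identify_kernel_target test can use the mlx5 ports, rdma_device_init in nvmf/common.sh loads the InfiniBand/RDMA kernel modules; the individual modprobe calls are traced in the lines that follow. A minimal bash sketch of that step, with the Linux guard simplified:

```sh
# load_ib_rdma_modules (nvmf/common.sh@58-68) as traced next: load the IB core/CM
# stack plus the RDMA-CM modules that nvme-rdma and the SPDK target depend on.
if [ "$(uname)" = Linux ]; then
    modprobe ib_cm
    modprobe ib_core
    modprobe ib_umad
    modprobe ib_uverbs
    modprobe iw_cm
    modprobe rdma_cm
    modprobe rdma_ucm
fi
```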
15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 
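The trace that follows reads each RDMA netdev's IPv4 address with `ip -o -4 addr show`, awk, and cut; that is how 192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1) end up as the target IPs for this run. A simplified sketch of that extraction; the direct variable assignments are a shorthand for the RDMA_IP_LIST / head / tail plumbing the trace actually performs.

```sh
# get_ip_address as traced (nvmf/common.sh@112-113): first IPv4 address of a netdev.
get_ip_address() {
    local interface=$1
    # `ip -o -4` prints one line per address; field 4 is the CIDR, e.g. 192.168.100.8/24.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
```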
00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:59.219 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:59.219 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:24:59.219 altname enp152s0f0np0 00:24:59.219 altname ens817f0np0 00:24:59.219 inet 192.168.100.8/24 scope global mlx_0_0 00:24:59.219 valid_lft forever preferred_lft forever 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:59.219 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:59.219 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:24:59.219 altname enp152s0f1np1 00:24:59.219 altname ens817f1np1 00:24:59.219 inet 192.168.100.9/24 scope global mlx_0_1 00:24:59.219 valid_lft forever preferred_lft forever 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:59.219 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:59.220 192.168.100.9' 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:59.220 192.168.100.9' 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:59.220 192.168.100.9' 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:24:59.220 15:07:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:59.220 15:07:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:03.428 Waiting for block devices as requested 00:25:03.428 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:03.428 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:03.428 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:03.428 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:03.428 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:03.428 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:03.428 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:03.428 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:03.428 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:25:03.688 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:03.688 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:03.688 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:03.949 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:03.949 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:03.949 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:03.949 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:04.210 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:04.210 No valid GPT data, bailing 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir 
/sys/kernel/config/nvmet/ports/1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:04.210 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -t rdma -s 4420 00:25:04.471 00:25:04.471 Discovery Log Number of Records 2, Generation counter 2 00:25:04.471 =====Discovery Log Entry 0====== 00:25:04.471 trtype: rdma 00:25:04.471 adrfam: ipv4 00:25:04.471 subtype: current discovery subsystem 00:25:04.471 treq: not specified, sq flow control disable supported 00:25:04.471 portid: 1 00:25:04.471 trsvcid: 4420 00:25:04.471 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:04.471 traddr: 192.168.100.8 00:25:04.471 eflags: none 00:25:04.471 rdma_prtype: not specified 00:25:04.471 rdma_qptype: connected 00:25:04.471 rdma_cms: rdma-cm 00:25:04.471 rdma_pkey: 0x0000 00:25:04.471 =====Discovery Log Entry 1====== 00:25:04.471 trtype: rdma 00:25:04.471 adrfam: ipv4 00:25:04.471 subtype: nvme subsystem 00:25:04.471 treq: not specified, sq flow control disable supported 00:25:04.471 portid: 1 00:25:04.471 trsvcid: 4420 00:25:04.471 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:04.471 traddr: 192.168.100.8 00:25:04.471 eflags: none 00:25:04.471 rdma_prtype: not specified 00:25:04.471 rdma_qptype: connected 00:25:04.471 rdma_cms: rdma-cm 00:25:04.471 rdma_pkey: 0x0000 00:25:04.471 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:25:04.471 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:04.471 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.471 ===================================================== 00:25:04.471 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:04.471 ===================================================== 00:25:04.471 Controller Capabilities/Features 00:25:04.471 ================================ 00:25:04.471 Vendor ID: 0000 00:25:04.471 Subsystem Vendor ID: 0000 00:25:04.471 Serial Number: 8b0fd1dfbdcb75017d53 00:25:04.471 Model Number: Linux 00:25:04.471 Firmware Version: 6.7.0-68 00:25:04.471 Recommended Arb Burst: 0 00:25:04.471 IEEE OUI Identifier: 00 00 00 00:25:04.471 Multi-path I/O 00:25:04.471 May have multiple subsystem ports: No 00:25:04.471 May have multiple controllers: No 00:25:04.471 Associated with SR-IOV VF: No 00:25:04.471 
Max Data Transfer Size: Unlimited 00:25:04.471 Max Number of Namespaces: 0 00:25:04.471 Max Number of I/O Queues: 1024 00:25:04.471 NVMe Specification Version (VS): 1.3 00:25:04.471 NVMe Specification Version (Identify): 1.3 00:25:04.471 Maximum Queue Entries: 128 00:25:04.471 Contiguous Queues Required: No 00:25:04.471 Arbitration Mechanisms Supported 00:25:04.471 Weighted Round Robin: Not Supported 00:25:04.471 Vendor Specific: Not Supported 00:25:04.471 Reset Timeout: 7500 ms 00:25:04.471 Doorbell Stride: 4 bytes 00:25:04.471 NVM Subsystem Reset: Not Supported 00:25:04.471 Command Sets Supported 00:25:04.471 NVM Command Set: Supported 00:25:04.471 Boot Partition: Not Supported 00:25:04.471 Memory Page Size Minimum: 4096 bytes 00:25:04.471 Memory Page Size Maximum: 4096 bytes 00:25:04.471 Persistent Memory Region: Not Supported 00:25:04.471 Optional Asynchronous Events Supported 00:25:04.471 Namespace Attribute Notices: Not Supported 00:25:04.471 Firmware Activation Notices: Not Supported 00:25:04.471 ANA Change Notices: Not Supported 00:25:04.471 PLE Aggregate Log Change Notices: Not Supported 00:25:04.471 LBA Status Info Alert Notices: Not Supported 00:25:04.471 EGE Aggregate Log Change Notices: Not Supported 00:25:04.471 Normal NVM Subsystem Shutdown event: Not Supported 00:25:04.471 Zone Descriptor Change Notices: Not Supported 00:25:04.471 Discovery Log Change Notices: Supported 00:25:04.471 Controller Attributes 00:25:04.471 128-bit Host Identifier: Not Supported 00:25:04.471 Non-Operational Permissive Mode: Not Supported 00:25:04.471 NVM Sets: Not Supported 00:25:04.472 Read Recovery Levels: Not Supported 00:25:04.472 Endurance Groups: Not Supported 00:25:04.472 Predictable Latency Mode: Not Supported 00:25:04.472 Traffic Based Keep ALive: Not Supported 00:25:04.472 Namespace Granularity: Not Supported 00:25:04.472 SQ Associations: Not Supported 00:25:04.472 UUID List: Not Supported 00:25:04.472 Multi-Domain Subsystem: Not Supported 00:25:04.472 Fixed Capacity Management: Not Supported 00:25:04.472 Variable Capacity Management: Not Supported 00:25:04.472 Delete Endurance Group: Not Supported 00:25:04.472 Delete NVM Set: Not Supported 00:25:04.472 Extended LBA Formats Supported: Not Supported 00:25:04.472 Flexible Data Placement Supported: Not Supported 00:25:04.472 00:25:04.472 Controller Memory Buffer Support 00:25:04.472 ================================ 00:25:04.472 Supported: No 00:25:04.472 00:25:04.472 Persistent Memory Region Support 00:25:04.472 ================================ 00:25:04.472 Supported: No 00:25:04.472 00:25:04.472 Admin Command Set Attributes 00:25:04.472 ============================ 00:25:04.472 Security Send/Receive: Not Supported 00:25:04.472 Format NVM: Not Supported 00:25:04.472 Firmware Activate/Download: Not Supported 00:25:04.472 Namespace Management: Not Supported 00:25:04.472 Device Self-Test: Not Supported 00:25:04.472 Directives: Not Supported 00:25:04.472 NVMe-MI: Not Supported 00:25:04.472 Virtualization Management: Not Supported 00:25:04.472 Doorbell Buffer Config: Not Supported 00:25:04.472 Get LBA Status Capability: Not Supported 00:25:04.472 Command & Feature Lockdown Capability: Not Supported 00:25:04.472 Abort Command Limit: 1 00:25:04.472 Async Event Request Limit: 1 00:25:04.472 Number of Firmware Slots: N/A 00:25:04.472 Firmware Slot 1 Read-Only: N/A 00:25:04.472 Firmware Activation Without Reset: N/A 00:25:04.472 Multiple Update Detection Support: N/A 00:25:04.472 Firmware Update Granularity: No Information Provided 00:25:04.472 
Per-Namespace SMART Log: No 00:25:04.472 Asymmetric Namespace Access Log Page: Not Supported 00:25:04.472 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:04.472 Command Effects Log Page: Not Supported 00:25:04.472 Get Log Page Extended Data: Supported 00:25:04.472 Telemetry Log Pages: Not Supported 00:25:04.472 Persistent Event Log Pages: Not Supported 00:25:04.472 Supported Log Pages Log Page: May Support 00:25:04.472 Commands Supported & Effects Log Page: Not Supported 00:25:04.472 Feature Identifiers & Effects Log Page:May Support 00:25:04.472 NVMe-MI Commands & Effects Log Page: May Support 00:25:04.472 Data Area 4 for Telemetry Log: Not Supported 00:25:04.472 Error Log Page Entries Supported: 1 00:25:04.472 Keep Alive: Not Supported 00:25:04.472 00:25:04.472 NVM Command Set Attributes 00:25:04.472 ========================== 00:25:04.472 Submission Queue Entry Size 00:25:04.472 Max: 1 00:25:04.472 Min: 1 00:25:04.472 Completion Queue Entry Size 00:25:04.472 Max: 1 00:25:04.472 Min: 1 00:25:04.472 Number of Namespaces: 0 00:25:04.472 Compare Command: Not Supported 00:25:04.472 Write Uncorrectable Command: Not Supported 00:25:04.472 Dataset Management Command: Not Supported 00:25:04.472 Write Zeroes Command: Not Supported 00:25:04.472 Set Features Save Field: Not Supported 00:25:04.472 Reservations: Not Supported 00:25:04.472 Timestamp: Not Supported 00:25:04.472 Copy: Not Supported 00:25:04.472 Volatile Write Cache: Not Present 00:25:04.472 Atomic Write Unit (Normal): 1 00:25:04.472 Atomic Write Unit (PFail): 1 00:25:04.472 Atomic Compare & Write Unit: 1 00:25:04.472 Fused Compare & Write: Not Supported 00:25:04.472 Scatter-Gather List 00:25:04.472 SGL Command Set: Supported 00:25:04.472 SGL Keyed: Supported 00:25:04.472 SGL Bit Bucket Descriptor: Not Supported 00:25:04.472 SGL Metadata Pointer: Not Supported 00:25:04.472 Oversized SGL: Not Supported 00:25:04.472 SGL Metadata Address: Not Supported 00:25:04.472 SGL Offset: Supported 00:25:04.472 Transport SGL Data Block: Not Supported 00:25:04.472 Replay Protected Memory Block: Not Supported 00:25:04.472 00:25:04.472 Firmware Slot Information 00:25:04.472 ========================= 00:25:04.472 Active slot: 0 00:25:04.472 00:25:04.472 00:25:04.472 Error Log 00:25:04.472 ========= 00:25:04.472 00:25:04.472 Active Namespaces 00:25:04.472 ================= 00:25:04.472 Discovery Log Page 00:25:04.472 ================== 00:25:04.472 Generation Counter: 2 00:25:04.472 Number of Records: 2 00:25:04.472 Record Format: 0 00:25:04.472 00:25:04.472 Discovery Log Entry 0 00:25:04.472 ---------------------- 00:25:04.472 Transport Type: 1 (RDMA) 00:25:04.472 Address Family: 1 (IPv4) 00:25:04.472 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:04.472 Entry Flags: 00:25:04.472 Duplicate Returned Information: 0 00:25:04.472 Explicit Persistent Connection Support for Discovery: 0 00:25:04.472 Transport Requirements: 00:25:04.472 Secure Channel: Not Specified 00:25:04.472 Port ID: 1 (0x0001) 00:25:04.472 Controller ID: 65535 (0xffff) 00:25:04.472 Admin Max SQ Size: 32 00:25:04.472 Transport Service Identifier: 4420 00:25:04.472 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:04.472 Transport Address: 192.168.100.8 00:25:04.472 Transport Specific Address Subtype - RDMA 00:25:04.472 RDMA QP Service Type: 1 (Reliable Connected) 00:25:04.472 RDMA Provider Type: 1 (No provider specified) 00:25:04.472 RDMA CM Service: 1 (RDMA_CM) 00:25:04.472 Discovery Log Entry 1 00:25:04.472 ---------------------- 00:25:04.472 
Transport Type: 1 (RDMA) 00:25:04.472 Address Family: 1 (IPv4) 00:25:04.472 Subsystem Type: 2 (NVM Subsystem) 00:25:04.472 Entry Flags: 00:25:04.472 Duplicate Returned Information: 0 00:25:04.472 Explicit Persistent Connection Support for Discovery: 0 00:25:04.472 Transport Requirements: 00:25:04.472 Secure Channel: Not Specified 00:25:04.472 Port ID: 1 (0x0001) 00:25:04.472 Controller ID: 65535 (0xffff) 00:25:04.472 Admin Max SQ Size: 32 00:25:04.472 Transport Service Identifier: 4420 00:25:04.472 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:04.472 Transport Address: 192.168.100.8 00:25:04.472 Transport Specific Address Subtype - RDMA 00:25:04.472 RDMA QP Service Type: 1 (Reliable Connected) 00:25:04.472 RDMA Provider Type: 1 (No provider specified) 00:25:04.472 RDMA CM Service: 1 (RDMA_CM) 00:25:04.472 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:04.472 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.734 get_feature(0x01) failed 00:25:04.734 get_feature(0x02) failed 00:25:04.734 get_feature(0x04) failed 00:25:04.734 ===================================================== 00:25:04.734 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:25:04.734 ===================================================== 00:25:04.734 Controller Capabilities/Features 00:25:04.734 ================================ 00:25:04.734 Vendor ID: 0000 00:25:04.734 Subsystem Vendor ID: 0000 00:25:04.734 Serial Number: 79eba9de7a16d61ab44c 00:25:04.734 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:04.734 Firmware Version: 6.7.0-68 00:25:04.734 Recommended Arb Burst: 6 00:25:04.734 IEEE OUI Identifier: 00 00 00 00:25:04.734 Multi-path I/O 00:25:04.734 May have multiple subsystem ports: Yes 00:25:04.734 May have multiple controllers: Yes 00:25:04.734 Associated with SR-IOV VF: No 00:25:04.734 Max Data Transfer Size: 1048576 00:25:04.734 Max Number of Namespaces: 1024 00:25:04.734 Max Number of I/O Queues: 128 00:25:04.734 NVMe Specification Version (VS): 1.3 00:25:04.734 NVMe Specification Version (Identify): 1.3 00:25:04.734 Maximum Queue Entries: 128 00:25:04.734 Contiguous Queues Required: No 00:25:04.734 Arbitration Mechanisms Supported 00:25:04.734 Weighted Round Robin: Not Supported 00:25:04.734 Vendor Specific: Not Supported 00:25:04.734 Reset Timeout: 7500 ms 00:25:04.734 Doorbell Stride: 4 bytes 00:25:04.734 NVM Subsystem Reset: Not Supported 00:25:04.734 Command Sets Supported 00:25:04.734 NVM Command Set: Supported 00:25:04.734 Boot Partition: Not Supported 00:25:04.734 Memory Page Size Minimum: 4096 bytes 00:25:04.734 Memory Page Size Maximum: 4096 bytes 00:25:04.734 Persistent Memory Region: Not Supported 00:25:04.734 Optional Asynchronous Events Supported 00:25:04.734 Namespace Attribute Notices: Supported 00:25:04.734 Firmware Activation Notices: Not Supported 00:25:04.734 ANA Change Notices: Supported 00:25:04.734 PLE Aggregate Log Change Notices: Not Supported 00:25:04.734 LBA Status Info Alert Notices: Not Supported 00:25:04.734 EGE Aggregate Log Change Notices: Not Supported 00:25:04.734 Normal NVM Subsystem Shutdown event: Not Supported 00:25:04.734 Zone Descriptor Change Notices: Not Supported 00:25:04.734 Discovery Log Change Notices: Not Supported 00:25:04.734 Controller Attributes 00:25:04.734 128-bit Host Identifier: 
Supported 00:25:04.734 Non-Operational Permissive Mode: Not Supported 00:25:04.734 NVM Sets: Not Supported 00:25:04.734 Read Recovery Levels: Not Supported 00:25:04.734 Endurance Groups: Not Supported 00:25:04.734 Predictable Latency Mode: Not Supported 00:25:04.734 Traffic Based Keep ALive: Supported 00:25:04.734 Namespace Granularity: Not Supported 00:25:04.734 SQ Associations: Not Supported 00:25:04.734 UUID List: Not Supported 00:25:04.734 Multi-Domain Subsystem: Not Supported 00:25:04.734 Fixed Capacity Management: Not Supported 00:25:04.734 Variable Capacity Management: Not Supported 00:25:04.734 Delete Endurance Group: Not Supported 00:25:04.734 Delete NVM Set: Not Supported 00:25:04.734 Extended LBA Formats Supported: Not Supported 00:25:04.734 Flexible Data Placement Supported: Not Supported 00:25:04.734 00:25:04.734 Controller Memory Buffer Support 00:25:04.734 ================================ 00:25:04.734 Supported: No 00:25:04.734 00:25:04.734 Persistent Memory Region Support 00:25:04.734 ================================ 00:25:04.734 Supported: No 00:25:04.734 00:25:04.734 Admin Command Set Attributes 00:25:04.734 ============================ 00:25:04.734 Security Send/Receive: Not Supported 00:25:04.734 Format NVM: Not Supported 00:25:04.734 Firmware Activate/Download: Not Supported 00:25:04.734 Namespace Management: Not Supported 00:25:04.734 Device Self-Test: Not Supported 00:25:04.734 Directives: Not Supported 00:25:04.734 NVMe-MI: Not Supported 00:25:04.734 Virtualization Management: Not Supported 00:25:04.734 Doorbell Buffer Config: Not Supported 00:25:04.734 Get LBA Status Capability: Not Supported 00:25:04.734 Command & Feature Lockdown Capability: Not Supported 00:25:04.734 Abort Command Limit: 4 00:25:04.734 Async Event Request Limit: 4 00:25:04.734 Number of Firmware Slots: N/A 00:25:04.734 Firmware Slot 1 Read-Only: N/A 00:25:04.734 Firmware Activation Without Reset: N/A 00:25:04.734 Multiple Update Detection Support: N/A 00:25:04.734 Firmware Update Granularity: No Information Provided 00:25:04.734 Per-Namespace SMART Log: Yes 00:25:04.734 Asymmetric Namespace Access Log Page: Supported 00:25:04.734 ANA Transition Time : 10 sec 00:25:04.734 00:25:04.734 Asymmetric Namespace Access Capabilities 00:25:04.734 ANA Optimized State : Supported 00:25:04.734 ANA Non-Optimized State : Supported 00:25:04.734 ANA Inaccessible State : Supported 00:25:04.734 ANA Persistent Loss State : Supported 00:25:04.734 ANA Change State : Supported 00:25:04.734 ANAGRPID is not changed : No 00:25:04.734 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:04.734 00:25:04.734 ANA Group Identifier Maximum : 128 00:25:04.734 Number of ANA Group Identifiers : 128 00:25:04.734 Max Number of Allowed Namespaces : 1024 00:25:04.734 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:04.734 Command Effects Log Page: Supported 00:25:04.734 Get Log Page Extended Data: Supported 00:25:04.734 Telemetry Log Pages: Not Supported 00:25:04.734 Persistent Event Log Pages: Not Supported 00:25:04.734 Supported Log Pages Log Page: May Support 00:25:04.734 Commands Supported & Effects Log Page: Not Supported 00:25:04.734 Feature Identifiers & Effects Log Page:May Support 00:25:04.734 NVMe-MI Commands & Effects Log Page: May Support 00:25:04.734 Data Area 4 for Telemetry Log: Not Supported 00:25:04.734 Error Log Page Entries Supported: 128 00:25:04.734 Keep Alive: Supported 00:25:04.734 Keep Alive Granularity: 1000 ms 00:25:04.734 00:25:04.734 NVM Command Set Attributes 00:25:04.734 ========================== 
00:25:04.734 Submission Queue Entry Size 00:25:04.734 Max: 64 00:25:04.734 Min: 64 00:25:04.734 Completion Queue Entry Size 00:25:04.734 Max: 16 00:25:04.734 Min: 16 00:25:04.734 Number of Namespaces: 1024 00:25:04.734 Compare Command: Not Supported 00:25:04.734 Write Uncorrectable Command: Not Supported 00:25:04.734 Dataset Management Command: Supported 00:25:04.734 Write Zeroes Command: Supported 00:25:04.734 Set Features Save Field: Not Supported 00:25:04.734 Reservations: Not Supported 00:25:04.734 Timestamp: Not Supported 00:25:04.734 Copy: Not Supported 00:25:04.734 Volatile Write Cache: Present 00:25:04.734 Atomic Write Unit (Normal): 1 00:25:04.734 Atomic Write Unit (PFail): 1 00:25:04.734 Atomic Compare & Write Unit: 1 00:25:04.734 Fused Compare & Write: Not Supported 00:25:04.734 Scatter-Gather List 00:25:04.734 SGL Command Set: Supported 00:25:04.734 SGL Keyed: Supported 00:25:04.734 SGL Bit Bucket Descriptor: Not Supported 00:25:04.734 SGL Metadata Pointer: Not Supported 00:25:04.734 Oversized SGL: Not Supported 00:25:04.734 SGL Metadata Address: Not Supported 00:25:04.734 SGL Offset: Supported 00:25:04.734 Transport SGL Data Block: Not Supported 00:25:04.734 Replay Protected Memory Block: Not Supported 00:25:04.734 00:25:04.734 Firmware Slot Information 00:25:04.734 ========================= 00:25:04.734 Active slot: 0 00:25:04.734 00:25:04.734 Asymmetric Namespace Access 00:25:04.734 =========================== 00:25:04.734 Change Count : 0 00:25:04.734 Number of ANA Group Descriptors : 1 00:25:04.734 ANA Group Descriptor : 0 00:25:04.734 ANA Group ID : 1 00:25:04.734 Number of NSID Values : 1 00:25:04.734 Change Count : 0 00:25:04.734 ANA State : 1 00:25:04.734 Namespace Identifier : 1 00:25:04.734 00:25:04.734 Commands Supported and Effects 00:25:04.734 ============================== 00:25:04.734 Admin Commands 00:25:04.734 -------------- 00:25:04.734 Get Log Page (02h): Supported 00:25:04.734 Identify (06h): Supported 00:25:04.734 Abort (08h): Supported 00:25:04.734 Set Features (09h): Supported 00:25:04.734 Get Features (0Ah): Supported 00:25:04.734 Asynchronous Event Request (0Ch): Supported 00:25:04.734 Keep Alive (18h): Supported 00:25:04.734 I/O Commands 00:25:04.734 ------------ 00:25:04.734 Flush (00h): Supported 00:25:04.734 Write (01h): Supported LBA-Change 00:25:04.734 Read (02h): Supported 00:25:04.734 Write Zeroes (08h): Supported LBA-Change 00:25:04.734 Dataset Management (09h): Supported 00:25:04.734 00:25:04.734 Error Log 00:25:04.734 ========= 00:25:04.734 Entry: 0 00:25:04.735 Error Count: 0x3 00:25:04.735 Submission Queue Id: 0x0 00:25:04.735 Command Id: 0x5 00:25:04.735 Phase Bit: 0 00:25:04.735 Status Code: 0x2 00:25:04.735 Status Code Type: 0x0 00:25:04.735 Do Not Retry: 1 00:25:04.735 Error Location: 0x28 00:25:04.735 LBA: 0x0 00:25:04.735 Namespace: 0x0 00:25:04.735 Vendor Log Page: 0x0 00:25:04.735 ----------- 00:25:04.735 Entry: 1 00:25:04.735 Error Count: 0x2 00:25:04.735 Submission Queue Id: 0x0 00:25:04.735 Command Id: 0x5 00:25:04.735 Phase Bit: 0 00:25:04.735 Status Code: 0x2 00:25:04.735 Status Code Type: 0x0 00:25:04.735 Do Not Retry: 1 00:25:04.735 Error Location: 0x28 00:25:04.735 LBA: 0x0 00:25:04.735 Namespace: 0x0 00:25:04.735 Vendor Log Page: 0x0 00:25:04.735 ----------- 00:25:04.735 Entry: 2 00:25:04.735 Error Count: 0x1 00:25:04.735 Submission Queue Id: 0x0 00:25:04.735 Command Id: 0x0 00:25:04.735 Phase Bit: 0 00:25:04.735 Status Code: 0x2 00:25:04.735 Status Code Type: 0x0 00:25:04.735 Do Not Retry: 1 00:25:04.735 Error Location: 
0x28 00:25:04.735 LBA: 0x0 00:25:04.735 Namespace: 0x0 00:25:04.735 Vendor Log Page: 0x0 00:25:04.735 00:25:04.735 Number of Queues 00:25:04.735 ================ 00:25:04.735 Number of I/O Submission Queues: 128 00:25:04.735 Number of I/O Completion Queues: 128 00:25:04.735 00:25:04.735 ZNS Specific Controller Data 00:25:04.735 ============================ 00:25:04.735 Zone Append Size Limit: 0 00:25:04.735 00:25:04.735 00:25:04.735 Active Namespaces 00:25:04.735 ================= 00:25:04.735 get_feature(0x05) failed 00:25:04.735 Namespace ID:1 00:25:04.735 Command Set Identifier: NVM (00h) 00:25:04.735 Deallocate: Supported 00:25:04.735 Deallocated/Unwritten Error: Not Supported 00:25:04.735 Deallocated Read Value: Unknown 00:25:04.735 Deallocate in Write Zeroes: Not Supported 00:25:04.735 Deallocated Guard Field: 0xFFFF 00:25:04.735 Flush: Supported 00:25:04.735 Reservation: Not Supported 00:25:04.735 Namespace Sharing Capabilities: Multiple Controllers 00:25:04.735 Size (in LBAs): 3750748848 (1788GiB) 00:25:04.735 Capacity (in LBAs): 3750748848 (1788GiB) 00:25:04.735 Utilization (in LBAs): 3750748848 (1788GiB) 00:25:04.735 UUID: 5f807e47-3832-4efe-9bc0-9a1234363e86 00:25:04.735 Thin Provisioning: Not Supported 00:25:04.735 Per-NS Atomic Units: Yes 00:25:04.735 Atomic Write Unit (Normal): 8 00:25:04.735 Atomic Write Unit (PFail): 8 00:25:04.735 Preferred Write Granularity: 8 00:25:04.735 Atomic Compare & Write Unit: 8 00:25:04.735 Atomic Boundary Size (Normal): 0 00:25:04.735 Atomic Boundary Size (PFail): 0 00:25:04.735 Atomic Boundary Offset: 0 00:25:04.735 NGUID/EUI64 Never Reused: No 00:25:04.735 ANA group ID: 1 00:25:04.735 Namespace Write Protected: No 00:25:04.735 Number of LBA Formats: 1 00:25:04.735 Current LBA Format: LBA Format #00 00:25:04.735 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:04.735 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:04.735 rmmod nvme_rdma 00:25:04.735 rmmod nvme_fabrics 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:25:04.735 15:07:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:08.944 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:08.944 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:10.861 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:25:10.861 00:25:10.861 real 0m19.649s 00:25:10.861 user 0m5.660s 00:25:10.861 sys 0m11.519s 00:25:10.861 15:07:26 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:10.861 15:07:26 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.861 ************************************ 00:25:10.861 END TEST nvmf_identify_kernel_target 00:25:10.861 ************************************ 00:25:10.861 15:07:26 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:25:10.861 15:07:26 nvmf_rdma -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:10.861 15:07:26 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:10.861 15:07:26 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.861 15:07:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:10.861 ************************************ 00:25:10.861 START TEST nvmf_auth_host 00:25:10.861 ************************************ 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:10.861 * Looking for test storage... 
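For reference, the kernel target teardown traced above (clean_kernel_target) follows a fixed order: disable the namespace, unlink the subsystem from the port, remove the configfs directories from the innermost one outward, then unload the nvmet modules. A minimal bash sketch of that order, assuming the nqn.2016-06.io.spdk:testnqn subsystem and port 1 used by this run; the redirect target of the traced 'echo 0' is not visible in the xtrace, so the namespace enable attribute is an assumption here:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    echo 0 > "$subsys/namespaces/1/enable"                 # assumed target of the traced 'echo 0'
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # drop the port's link to the subsystem
    rmdir "$subsys/namespaces/1"                           # innermost directory first
    rmdir "$port"
    rmdir "$subsys"
    modprobe -r nvmet_rdma nvmet                           # unload the kernel target modules last
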
00:25:10.861 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.861 15:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:18.998 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:18.998 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.998 15:07:34 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:18.998 Found net devices under 0000:98:00.0: mlx_0_0 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:18.998 Found net devices under 0000:98:00.1: mlx_0_1 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:18.998 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:18.999 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:18.999 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:25:18.999 altname enp152s0f0np0 00:25:18.999 altname ens817f0np0 00:25:18.999 inet 192.168.100.8/24 scope global mlx_0_0 00:25:18.999 valid_lft forever preferred_lft forever 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:18.999 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:18.999 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:25:18.999 altname enp152s0f1np1 00:25:18.999 altname ens817f1np1 00:25:18.999 inet 192.168.100.9/24 scope global mlx_0_1 00:25:18.999 valid_lft forever preferred_lft forever 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.999 
15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:18.999 192.168.100.9' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:18.999 192.168.100.9' 00:25:18.999 15:07:34 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:18.999 192.168.100.9' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1960311 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1960311 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1960311 ']' 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
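NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, set in the preceding lines, come from the RDMA interface scan traced above: the IPv4 address of each RDMA-capable netdev is collected, the first becomes the primary target address and the second the secondary one. A condensed sketch of that selection, assuming the mlx_0_0/mlx_0_1 interface names seen in this run:

    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip mlx_0_0)" "$(get_ip mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 in this run
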
00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.999 15:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2772755c0373841f12904f6e98bd2788 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1QP 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2772755c0373841f12904f6e98bd2788 0 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2772755c0373841f12904f6e98bd2788 0 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2772755c0373841f12904f6e98bd2788 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1QP 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1QP 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.1QP 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1f998f6f15cb548f986a6f94a50034bb5b114d1e6e025c3936563c1fb19ccf32 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Hgm 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f998f6f15cb548f986a6f94a50034bb5b114d1e6e025c3936563c1fb19ccf32 3 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f998f6f15cb548f986a6f94a50034bb5b114d1e6e025c3936563c1fb19ccf32 3 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:19.569 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f998f6f15cb548f986a6f94a50034bb5b114d1e6e025c3936563c1fb19ccf32 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Hgm 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Hgm 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Hgm 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=49375a39a1b385a47bfd89e13c1418634e9c668436c3d027 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.d8O 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 49375a39a1b385a47bfd89e13c1418634e9c668436c3d027 0 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 49375a39a1b385a47bfd89e13c1418634e9c668436c3d027 0 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=49375a39a1b385a47bfd89e13c1418634e9c668436c3d027 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:19.570 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:19.830 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.d8O 00:25:19.830 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.d8O 00:25:19.830 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.d8O 00:25:19.830 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:19.830 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:19.830 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:19.830 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c598b47a44d85651e55facc3cc604e15a2a300e17e646822 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Wx7 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c598b47a44d85651e55facc3cc604e15a2a300e17e646822 2 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c598b47a44d85651e55facc3cc604e15a2a300e17e646822 2 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c598b47a44d85651e55facc3cc604e15a2a300e17e646822 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Wx7 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Wx7 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Wx7 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e712788fadc25c09750b26ad11198d1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YTj 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e712788fadc25c09750b26ad11198d1 1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 8e712788fadc25c09750b26ad11198d1 1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e712788fadc25c09750b26ad11198d1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YTj 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YTj 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.YTj 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=959ce15f7b4baf28d8ce8fc0c276e783 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.XhC 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 959ce15f7b4baf28d8ce8fc0c276e783 1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 959ce15f7b4baf28d8ce8fc0c276e783 1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=959ce15f7b4baf28d8ce8fc0c276e783 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.XhC 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.XhC 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.XhC 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:19.831 15:07:35 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=abe0d4120be7ed134c1c53bb720695a0aa7c5a3bc9cf0d28 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.145 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key abe0d4120be7ed134c1c53bb720695a0aa7c5a3bc9cf0d28 2 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 abe0d4120be7ed134c1c53bb720695a0aa7c5a3bc9cf0d28 2 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=abe0d4120be7ed134c1c53bb720695a0aa7c5a3bc9cf0d28 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:19.831 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.145 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.145 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.145 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e67e9fab8232be0a8bc80299de232518 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.qzC 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e67e9fab8232be0a8bc80299de232518 0 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e67e9fab8232be0a8bc80299de232518 0 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e67e9fab8232be0a8bc80299de232518 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.qzC 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.qzC 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.qzC 
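The repeated gen_dhchap_key calls above all follow the same pattern: draw random bytes as a hex string with xxd, create a spdk.key-<digest>.XXX temp file, wrap the hex string into a DHHC-1 secret, and restrict the file to mode 0600. A condensed sketch of that helper as it appears in the trace; the python body behind format_dhchap_key (the actual DHHC-1 encoding) is not shown in the xtrace, so it is only referenced here, not reproduced:

    gen_dhchap_key() {                                   # gen_dhchap_key <digest> <hex-length>
        local digest=$1 len=$2
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"   # helper from nvmf/common.sh, body not traced
        chmod 0600 "$file"
        echo "$file"
    }
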
00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=06911eacd0a025827012c2cace5a1cbb12a63460a05c9566b358c9242d3879f9 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hUI 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 06911eacd0a025827012c2cace5a1cbb12a63460a05c9566b358c9242d3879f9 3 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 06911eacd0a025827012c2cace5a1cbb12a63460a05c9566b358c9242d3879f9 3 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=06911eacd0a025827012c2cace5a1cbb12a63460a05c9566b358c9242d3879f9 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:20.091 15:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hUI 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hUI 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hUI 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1960311 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1960311 ']' 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.091 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1QP 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Hgm ]] 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hgm 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.d8O 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Wx7 ]] 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wx7 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.YTj 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.XhC ]] 00:25:20.351 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XhC 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.145 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.qzC ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.qzC 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hUI 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:20.352 15:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:24.551 Waiting for block devices as requested 00:25:24.551 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:24.551 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:24.551 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:24.551 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:24.551 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:24.551 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:24.551 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:24.551 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:24.551 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:25:24.812 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:24.812 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:24.812 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:25.072 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:25.072 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:25.072 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:25.072 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:25.332 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:25.901 No valid GPT data, bailing 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:25.901 
15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:25.901 15:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -t rdma -s 4420 00:25:26.161 00:25:26.161 Discovery Log Number of Records 2, Generation counter 2 00:25:26.161 =====Discovery Log Entry 0====== 00:25:26.161 trtype: rdma 00:25:26.161 adrfam: ipv4 00:25:26.161 subtype: current discovery subsystem 00:25:26.161 treq: not specified, sq flow control disable supported 00:25:26.161 portid: 1 00:25:26.161 trsvcid: 4420 00:25:26.161 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:26.161 traddr: 192.168.100.8 00:25:26.161 eflags: none 00:25:26.161 rdma_prtype: not specified 00:25:26.161 rdma_qptype: connected 00:25:26.161 rdma_cms: rdma-cm 00:25:26.161 rdma_pkey: 0x0000 00:25:26.161 =====Discovery Log Entry 1====== 00:25:26.161 trtype: rdma 00:25:26.161 adrfam: ipv4 00:25:26.161 subtype: nvme subsystem 00:25:26.161 treq: not specified, sq flow control disable supported 00:25:26.161 portid: 1 00:25:26.161 trsvcid: 4420 00:25:26.161 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:26.161 traddr: 192.168.100.8 00:25:26.161 eflags: none 00:25:26.161 rdma_prtype: not specified 00:25:26.161 rdma_qptype: connected 00:25:26.161 rdma_cms: rdma-cm 00:25:26.161 rdma_pkey: 0x0000 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.161 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.421 nvme0n1 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.421 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.681 nvme0n1 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.681 15:07:42 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.681 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.941 nvme0n1 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:26.941 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.942 15:07:42 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.942 15:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.202 nvme0n1 00:25:27.202 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.202 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.202 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.202 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.202 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.202 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:27.461 15:07:43 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.461 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.720 nvme0n1 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.720 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.721 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.981 nvme0n1 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:27.981 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.982 15:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.242 nvme0n1 00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
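Every digest/dhgroup/keyid combination exercised above and below runs the same connect_authenticate sequence: configure the allowed DH-HMAC-CHAP digests and DH groups, attach a controller over RDMA to the kernel soft target at 192.168.100.8:4420 using the keyring entries registered earlier, check that the controller came up, and detach it. A condensed sketch of one such iteration, reconstructed from the traced RPCs (rpc_cmd again assumed to wrap scripts/rpc.py), looks like this:

    # one connect_authenticate iteration, e.g. digest=sha256 dhgroup=ffdhe3072 keyid=1
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # the attach only succeeds if DH-HMAC-CHAP authentication completed
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0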
00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.242 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.502 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.762 nvme0n1 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.762 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:28.763 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:28.763 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:28.763 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:28.763 15:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:28.763 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.763 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.763 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 nvme0n1 00:25:29.023 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.023 15:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.023 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 15:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:29.023 
15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.023 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.283 nvme0n1 00:25:29.283 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.283 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.283 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.283 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.283 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.543 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.804 nvme0n1 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:29.804 15:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.374 nvme0n1 00:25:30.374 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.374 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.374 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.374 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.374 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.374 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.374 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.374 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.374 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.375 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.635 nvme0n1 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.635 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.896 15:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.156 nvme0n1 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.156 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.727 nvme0n1 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:31.727 15:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:31.728 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.728 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.728 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.988 nvme0n1 00:25:31.988 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.988 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.988 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.988 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.988 15:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.988 15:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.988 
15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.988 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.248 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.818 nvme0n1 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.818 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.819 15:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.390 nvme0n1 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:33.390 
15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.390 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.961 nvme0n1 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.961 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:33.962 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:33.962 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:33.962 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:33.962 15:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:33.962 15:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.962 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.962 15:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.534 nvme0n1 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:34.534 15:07:50 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.534 15:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:35.103 nvme0n1 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.103 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:35.104 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:35.364 15:07:51 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.364 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.360 nvme0n1 00:25:36.360 15:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:36.360 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:36.361 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:36.361 15:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:36.361 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.361 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.361 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.027 nvme0n1 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.027 15:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.027 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.983 nvme0n1 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.983 15:07:53 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.983 15:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.924 nvme0n1 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:38.924 15:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.925 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.925 15:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.866 nvme0n1 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:39.866 
15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.866 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.867 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.127 nvme0n1 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.127 
15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.127 15:07:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:40.127 
15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.127 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.388 nvme0n1 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.388 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 nvme0n1 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.649 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.910 nvme0n1 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:40.910 15:07:56 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.910 15:07:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.171 nvme0n1 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.171 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.431 15:07:57 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.431 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.432 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.692 nvme0n1 
00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.692 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.953 nvme0n1 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.953 15:07:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.213 nvme0n1 00:25:42.213 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.213 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.213 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.213 15:07:58 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.213 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.213 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.474 
15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:42.474 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:42.475 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:42.475 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:42.475 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.475 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.475 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.735 nvme0n1 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:42.735 15:07:58 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.735 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.996 nvme0n1 00:25:42.996 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.996 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.996 15:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.996 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.996 15:07:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.996 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:43.256 15:07:59 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.256 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.516 nvme0n1 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.516 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.085 nvme0n1 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:44.085 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.086 15:07:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.345 nvme0n1 00:25:44.345 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.345 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.345 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.345 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.346 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.346 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.346 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.346 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.346 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.346 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.606 
15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.606 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.866 nvme0n1 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.866 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.867 15:08:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.437 nvme0n1 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.437 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.009 nvme0n1 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.009 15:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.578 nvme0n1 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.578 15:08:02 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.578 15:08:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.148 nvme0n1 00:25:47.148 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.148 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.148 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.148 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.148 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.148 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.148 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.148 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.148 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.407 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.408 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.977 nvme0n1 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.977 15:08:03 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.977 15:08:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.549 nvme0n1 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.549 15:08:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.489 nvme0n1 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:49.489 15:08:05 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.489 15:08:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.428 nvme0n1 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.428 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.429 
15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.429 15:08:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.369 nvme0n1 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.369 
15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.369 15:08:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.308 nvme0n1 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.308 15:08:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.247 nvme0n1 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 
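At this point the trace has advanced the outer digest loop from sha384 to sha512 and restarted the dhgroup and keyid loops; the @100-@103 markers show the shape of the driver in host/auth.sh: three nested loops, each iteration first provisioning a key on the target (nvmet_auth_set_key) and then exercising a full authenticated connect from the host (connect_authenticate). A minimal sketch of that structure follows; the exact array contents are only partially inferred from this part of the log (sha384/sha512, ffdhe2048/ffdhe3072/ffdhe6144/ffdhe8192 and key ids 0-4 are visible here, anything beyond that is an assumption).

    # hypothetical reconstruction of the host/auth.sh driver loops seen at @100-@103
    digests=("sha384" "sha512")                  # earlier digests (e.g. sha256) assumed, not visible here
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe6144" "ffdhe8192")   # additional groups assumed
    # keys[0..4] and ckeys[0..4] hold the DHHC-1 secrets generated earlier in the test

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side (@103)
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side (@104)
            done
        done
    done

Each pass that ends with a matching bdev_nvme_get_controllers/jq check followed by bdev_nvme_detach_controller, as in the entries above, counts as one successful authenticated connect for that digest/dhgroup/key combination.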
00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.247 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.508 nvme0n1 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.508 15:08:09 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
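The nvmet_auth_set_key entries above only show the values being echoed (@48-@51): the hash in 'hmac(sha512)' form, the DH group name, the DHHC-1 host key and, when one is defined, the controller key. The trace does not show where those echoes are redirected; on a Linux soft target they would normally be written into the per-host DH-HMAC-CHAP attributes in nvmet configfs. A hedged sketch of that provisioning step is below; the configfs path and the hostnqn are assumptions, not taken from this log.

    # hypothetical target-side provisioning matching the @48-@51 echoes
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # hostnqn assumed
    echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest used for DH-HMAC-CHAP
    echo 'ffdhe2048'    > "$host/dhchap_dhgroup"   # FFDHE group for the DH exchange
    echo "$key"  > "$host/dhchap_key"              # host secret echoed at @50
    [ -n "$ckey" ] && echo "$ckey" > "$host/dhchap_ctrl_key"   # @51, only set for bidirectional auth

The second field of a DHHC-1 secret (00/01/02/03) encodes whether and with which SHA the secret was transformed (00 means no transformation, 01/02/03 correspond to SHA-256/384/512), which is why keys of different lengths appear with different prefixes throughout this run.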
00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.508 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:53.509 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:53.509 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:53.509 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:53.509 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:53.509 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.509 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.509 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.769 nvme0n1 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
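Each connect_authenticate pass, such as the key1/ckey1 one completed above, is driven entirely through SPDK JSON-RPC: bdev_nvme is restricted to a single digest and DH group, the controller is attached with the key names registered earlier in the test, and success is checked by listing controllers before detaching again. A condensed sketch of that host-side sequence is shown below, using rpc_cmd exactly as it appears in the trace (rpc_cmd is assumed to be the test suite's wrapper around scripts/rpc.py).

    # host-side steps of connect_authenticate, as seen at host/auth.sh@60-@65
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    ip=192.168.100.8    # get_main_ns_ip resolves NVMF_FIRST_TARGET_IP for the rdma transport
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # connect succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0

If authentication fails for a given digest/dhgroup/key combination, the attach RPC errors out and the controller-name check never matches, which is what this loop is screening for across every combination.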
00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.769 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.030 nvme0n1 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.030 15:08:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.291 nvme0n1 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:54.291 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.292 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.552 nvme0n1 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.552 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.813 15:08:10 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.813 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.814 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.074 nvme0n1 00:25:55.074 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.074 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.074 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.074 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.074 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.074 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.074 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.074 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.074 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.075 15:08:10 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.075 15:08:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.075 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.335 nvme0n1 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.335 15:08:11 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.335 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.595 nvme0n1 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.595 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.856 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.117 nvme0n1 00:25:56.117 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.117 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.117 15:08:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.117 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.117 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.117 15:08:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.117 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.378 nvme0n1 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
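The block above is one pass of the host/auth.sh loop that repeats throughout this trace: install the target-side secret for the current digest/dhgroup/keyid (nvmet_auth_set_key), restrict the host to that combination with bdev_nvme_set_options --dhchap-digests/--dhchap-dhgroups, attach the controller over RDMA with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller secret exists), confirm a controller named nvme0 via bdev_nvme_get_controllers, then detach before the next keyid. A minimal sketch of one such pass, driven by hand through SPDK's JSON-RPC client rather than the rpc_cmd harness wrapper, might look as follows; the address, port, NQNs and key names are taken from the trace, while the scripts/rpc.py invocation and the pre-registered key names are assumptions about setup done earlier in the run, outside this excerpt:

  # sketch only: assumes an SPDK target reachable at 192.168.100.8:4420 (values from the trace)
  # and host/controller keys already registered under the names key1/ckey1 earlier in the test
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next keyid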
00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.378 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.638 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.899 nvme0n1 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
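Note how host/auth.sh@58 in the trace builds the optional controller-key argument: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). When the controller secret for the current keyid is empty, as it is for keyid 4 in the passes above (ckey=''), the ${var:+...} expansion yields nothing, the array stays empty, and the attach is issued with --dhchap-key only, i.e. unidirectional authentication; for keyids 0-3 both --dhchap-key keyN and --dhchap-ctrlr-key ckeyN are passed for bidirectional authentication. A small self-contained sketch of the idiom (the ckeys contents here are hypothetical placeholders, not the real secrets):

  #!/usr/bin/env bash
  ckeys=("ctrl-secret-0" "")           # hypothetical: keyid 1 has no controller secret
  for keyid in 0 1; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # prints:
  #   keyid=0 extra args: --dhchap-ctrlr-key ckey0
  #   keyid=1 extra args: <none>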
00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.899 15:08:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.470 nvme0n1 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.470 
15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.470 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.730 nvme0n1 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:57.730 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 
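The get_main_ns_ip steps that precede every attach (nvmf/common.sh@741-755 above) map the transport name to the name of an environment variable and then dereference it: for rdma the candidate is NVMF_FIRST_TARGET_IP, which is why the trace first tests [[ -z NVMF_FIRST_TARGET_IP ]] and only then [[ -z 192.168.100.8 ]]. A sketch of that logic, assuming variables the nvmf test environment normally exports (the real helper in nvmf/common.sh may differ in detail, and the selector variable name here is an assumption):

  NVMF_FIRST_TARGET_IP=192.168.100.8    # value observed in the trace
  NVMF_INITIATOR_IP=10.0.0.1            # hypothetical; unused for the rdma path
  TEST_TRANSPORT=rdma                   # transport selector; name used for this sketch
  declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  ip=${ip_candidates[$TEST_TRANSPORT]}  # picks the variable *name* for the transport
  [[ -n ${!ip} ]] && echo "${!ip}"      # indirect expansion prints 192.168.100.8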
00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:57.990 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:57.991 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:57.991 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:57.991 15:08:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:57.991 15:08:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.991 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.991 15:08:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.251 nvme0n1 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:58.251 15:08:14 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.251 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.822 nvme0n1 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.822 15:08:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.394 nvme0n1 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.394 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.965 nvme0n1 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.965 15:08:15 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.965 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.535 nvme0n1 00:26:00.535 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.535 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.535 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.535 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.535 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.535 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.796 15:08:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.367 nvme0n1 00:26:01.367 15:08:17 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.367 15:08:17 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.367 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.936 nvme0n1 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjc3Mjc1NWMwMzczODQxZjEyOTA0ZjZlOThiZDI3ODhfyogK: 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: ]] 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY5OThmNmYxNWNiNTQ4Zjk4NmE2Zjk0YTUwMDM0YmI1YjExNGQxZTZlMDI1YzM5MzY1NjNjMWZiMTljY2YzMuIbSFY=: 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.937 15:08:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.880 nvme0n1 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.880 15:08:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.820 nvme0n1 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:03.820 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU3MTI3ODhmYWRjMjVjMDk3NTBiMjZhZDExMTk4ZDGsKMGX: 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: ]] 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTU5Y2UxNWY3YjRiYWYyOGQ4Y2U4ZmMwYzI3NmU3ODM2u+3t: 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.821 15:08:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.859 nvme0n1 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJlMGQ0MTIwYmU3ZWQxMzRjMWM1M2JiNzIwNjk1YTBhYTdjNWEzYmM5Y2YwZDI4a6aPeg==: 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: ]] 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTY3ZTlmYWI4MjMyYmUwYThiYzgwMjk5ZGUyMzI1MThkEFO0: 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.859 15:08:20 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.859 15:08:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.455 nvme0n1 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDY5MTFlYWNkMGEwMjU4MjcwMTJjMmNhY2U1YTFjYmIxMmE2MzQ2MGEwNWM5NTY2YjM1OGM5MjQyZDM4NzlmOT/c0s8=: 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.455 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.715 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.716 15:08:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.656 nvme0n1 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNzVhMzlhMWIzODVhNDdiZmQ4OWUxM2MxNDE4NjM0ZTljNjY4NDM2YzNkMDI3n5b9sA==: 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: ]] 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzU5OGI0N2E0NGQ4NTY1MWU1NWZhY2MzY2M2MDRlMTVhMmEzMDBlMTdlNjQ2ODIy/fJU0g==: 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:06.656 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
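The trace below runs the failure-path checks: the kernel target was just reconfigured (host/auth.sh@110) with key1 for sha256/ffdhe2048, so an attach that omits the DHCHAP key, or supplies a mismatched one, must be rejected, which is why each NOT-wrapped rpc_cmd that follows ends in a JSON-RPC error -5 (Input/output error). A minimal sketch of that pattern, assuming rpc.py is invoked directly; expect_attach_failure is a hypothetical stand-in for the suite's NOT helper, while the transport, address, NQNs, and key names are copied from the surrounding log:

# expect the attach to fail; succeeding would mean DH-HMAC-CHAP was not enforced
expect_attach_failure() {
    if "$@"; then
        echo "ERROR: attach unexpectedly succeeded" >&2
        return 1
    fi
    return 0
}

# no --dhchap-key at all
expect_attach_failure ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 \
    -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

# mismatched key (key2 where the target holds key1); the trace also repeats this
# with key1 plus the wrong controller key (ckey2)
expect_attach_failure ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 \
    -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2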
00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.657 request: 00:26:06.657 { 00:26:06.657 "name": "nvme0", 00:26:06.657 "trtype": "rdma", 00:26:06.657 "traddr": "192.168.100.8", 00:26:06.657 "adrfam": "ipv4", 00:26:06.657 "trsvcid": "4420", 00:26:06.657 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:06.657 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:06.657 "prchk_reftag": false, 00:26:06.657 "prchk_guard": false, 00:26:06.657 "hdgst": false, 00:26:06.657 "ddgst": false, 00:26:06.657 "method": "bdev_nvme_attach_controller", 00:26:06.657 "req_id": 1 00:26:06.657 } 00:26:06.657 Got JSON-RPC error response 00:26:06.657 response: 00:26:06.657 { 00:26:06.657 "code": -5, 00:26:06.657 "message": "Input/output error" 00:26:06.657 } 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.657 request: 00:26:06.657 { 00:26:06.657 "name": "nvme0", 00:26:06.657 "trtype": "rdma", 00:26:06.657 "traddr": "192.168.100.8", 00:26:06.657 "adrfam": "ipv4", 00:26:06.657 "trsvcid": "4420", 00:26:06.657 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:06.657 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:06.657 "prchk_reftag": false, 00:26:06.657 "prchk_guard": false, 00:26:06.657 "hdgst": false, 00:26:06.657 "ddgst": false, 00:26:06.657 "dhchap_key": "key2", 00:26:06.657 "method": "bdev_nvme_attach_controller", 00:26:06.657 "req_id": 1 00:26:06.657 } 00:26:06.657 Got JSON-RPC error response 00:26:06.657 response: 00:26:06.657 { 00:26:06.657 "code": -5, 00:26:06.657 "message": "Input/output error" 00:26:06.657 } 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.657 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.916 15:08:22 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:06.916 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.917 request: 00:26:06.917 { 00:26:06.917 "name": "nvme0", 00:26:06.917 "trtype": "rdma", 00:26:06.917 "traddr": "192.168.100.8", 00:26:06.917 "adrfam": "ipv4", 00:26:06.917 "trsvcid": "4420", 00:26:06.917 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:06.917 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:06.917 "prchk_reftag": false, 00:26:06.917 "prchk_guard": false, 00:26:06.917 "hdgst": false, 00:26:06.917 "ddgst": false, 00:26:06.917 "dhchap_key": "key1", 00:26:06.917 "dhchap_ctrlr_key": "ckey2", 00:26:06.917 "method": "bdev_nvme_attach_controller", 00:26:06.917 "req_id": 1 00:26:06.917 } 00:26:06.917 Got JSON-RPC error response 00:26:06.917 response: 00:26:06.917 { 00:26:06.917 "code": -5, 00:26:06.917 "message": "Input/output error" 00:26:06.917 } 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 
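The cleanup that follows first runs nvmftestfini to unload the host-side nvme-rdma and nvme-fabrics modules, then unlinks the host from the kernel target and tears the nvmet configfs tree down in reverse order of creation before unloading the target modules. A condensed sketch, with paths taken from the trace; the redirect target of the bare 'echo 0' is not visible in the xtrace, so treating it as the namespace enable attribute is an assumption:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
port=/sys/kernel/config/nvmet/ports/1

rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"   # unlink host from the subsystem
rmdir "$host"                                          # drop the host entry
echo 0 > "$subsys/namespaces/1/enable"                 # assumed target of the bare 'echo 0'
rm -f "$port/subsystems/nqn.2024-02.io.spdk:cnode0"    # unlink subsystem from the port
rmdir "$subsys/namespaces/1"
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_rdma nvmet                           # unload the kernel target last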
00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:06.917 rmmod nvme_rdma 00:26:06.917 rmmod nvme_fabrics 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1960311 ']' 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1960311 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1960311 ']' 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1960311 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1960311 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1960311' 00:26:06.917 killing process with pid 1960311 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1960311 00:26:06.917 15:08:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1960311 00:26:07.176 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:07.176 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:07.176 15:08:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:07.176 15:08:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:07.176 15:08:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:07.176 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:07.177 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:07.177 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:07.177 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:07.177 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:26:07.177 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:07.177 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:07.177 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:26:07.177 15:08:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:11.378 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:11.378 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:11.378 15:08:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.1QP /tmp/spdk.key-null.d8O /tmp/spdk.key-sha256.YTj /tmp/spdk.key-sha384.145 /tmp/spdk.key-sha512.hUI /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:26:11.378 15:08:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:14.678 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:26:14.678 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:26:14.678 00:26:14.678 real 1m3.948s 00:26:14.678 user 0m57.844s 00:26:14.678 sys 0m15.701s 00:26:14.678 15:08:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.678 15:08:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.678 ************************************ 00:26:14.678 END TEST nvmf_auth_host 
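For reference, the clean_kernel_target sequence logged above (host/auth.sh@25-27 and nvmf/common.sh@684-695) tears down the in-kernel nvmet target through configfs, removing children before parents. Reconstructed as a standalone sketch (run as root; the namespaces/1/enable path is inferred from the bare "echo 0" at nvmf/common.sh@686):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"   # drop the host ACL symlink
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$subsys/namespaces/1/enable"                 # take the namespace offline (inferred path)
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"                           # children first...
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"                                        # ...then the subsystem itself
    modprobe -r nvmet_rdma nvmet                           # unload once configfs is empty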
00:26:14.678 ************************************ 00:26:14.678 15:08:30 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:26:14.678 15:08:30 nvmf_rdma -- nvmf/nvmf.sh@107 -- # [[ rdma == \t\c\p ]] 00:26:14.678 15:08:30 nvmf_rdma -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:26:14.678 15:08:30 nvmf_rdma -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:26:14.678 15:08:30 nvmf_rdma -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:26:14.678 15:08:30 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:14.678 15:08:30 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:14.678 15:08:30 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.678 15:08:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:14.678 ************************************ 00:26:14.678 START TEST nvmf_bdevperf 00:26:14.678 ************************************ 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:14.678 * Looking for test storage... 00:26:14.678 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
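A side note on the common.sh initialization above: the host identity is generated, not hard-coded. nvme gen-hostnqn emits a UUID-based NQN, and the host ID reuses its trailing UUID. A sketch of the capture, with the exact parameter expansion being an assumption about common.sh's mechanics:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # -> nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # keep everything after the last ':' (the UUID)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")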
00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.678 15:08:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:14.678 15:08:30 
nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.679 15:08:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:22.824 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:26:22.825 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:26:22.825 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.825 15:08:38 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:26:22.825 Found net devices under 0000:98:00.0: mlx_0_0 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:26:22.825 Found net devices under 0000:98:00.1: mlx_0_1 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:22.825 15:08:38 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:22.825 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:22.825 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:26:22.825 altname enp152s0f0np0 00:26:22.825 altname ens817f0np0 00:26:22.825 inet 192.168.100.8/24 scope global mlx_0_0 00:26:22.825 valid_lft forever preferred_lft forever 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:22.825 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:22.825 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:26:22.825 altname enp152s0f1np1 00:26:22.825 altname ens817f1np1 00:26:22.825 inet 192.168.100.9/24 scope global mlx_0_1 00:26:22.825 valid_lft forever preferred_lft forever 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:22.825 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:22.826 192.168.100.9' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:22.826 192.168.100.9' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:26:22.826 15:08:38 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:22.826 192.168.100.9' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1978829 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1978829 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1978829 ']' 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.826 15:08:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.100 [2024-07-15 15:08:38.934158] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:23.100 [2024-07-15 15:08:38.934240] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.100 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.100 [2024-07-15 15:08:39.021817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:23.100 [2024-07-15 15:08:39.116695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.100 [2024-07-15 15:08:39.116758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
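The nvmfappstart step above (nvmfpid=1978829, then waitforlisten) follows the usual SPDK autotest pattern: launch the target in the background with the requested core mask, then block until its JSON-RPC socket answers. Roughly, as a sketch against the default /var/tmp/spdk.sock socket; waitforlisten's real polling logic lives in autotest_common.sh:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll until the target is up and serving RPCs; rpc_get_methods is a cheap probe.
    until scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done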
00:26:23.100 [2024-07-15 15:08:39.116767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.100 [2024-07-15 15:08:39.116774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.100 [2024-07-15 15:08:39.116780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.100 [2024-07-15 15:08:39.116912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.100 [2024-07-15 15:08:39.117079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.100 [2024-07-15 15:08:39.117079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.675 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:23.675 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:23.675 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:23.675 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:23.675 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.935 [2024-07-15 15:08:39.796567] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x148e920/0x1492e10) succeed. 00:26:23.935 [2024-07-15 15:08:39.810657] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x148fec0/0x14d44a0) succeed. 
00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.935 Malloc0 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.935 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.936 [2024-07-15 15:08:39.967174] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.936 { 00:26:23.936 "params": { 00:26:23.936 "name": "Nvme$subsystem", 00:26:23.936 "trtype": "$TEST_TRANSPORT", 00:26:23.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.936 "adrfam": "ipv4", 00:26:23.936 "trsvcid": "$NVMF_PORT", 00:26:23.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.936 "hdgst": ${hdgst:-false}, 00:26:23.936 "ddgst": ${ddgst:-false} 00:26:23.936 }, 00:26:23.936 "method": "bdev_nvme_attach_controller" 00:26:23.936 } 00:26:23.936 EOF 00:26:23.936 )") 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
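Condensing the target-side bring-up logged just above (host/bdevperf.sh@17 through @21) into one rpc.py session, under the same assumption that rpc_cmd forwards its arguments to scripts/rpc.py:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                               # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420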
00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:26:23.936 15:08:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:23.936 "params": {
00:26:23.936 "name": "Nvme1",
00:26:23.936 "trtype": "rdma",
00:26:23.936 "traddr": "192.168.100.8",
00:26:23.936 "adrfam": "ipv4",
00:26:23.936 "trsvcid": "4420",
00:26:23.936 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:23.936 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:23.936 "hdgst": false,
00:26:23.936 "ddgst": false
00:26:23.936 },
00:26:23.936 "method": "bdev_nvme_attach_controller"
00:26:23.936 }'
00:26:24.197 [2024-07-15 15:08:40.026441] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:26:24.197 [2024-07-15 15:08:40.026492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978969 ]
00:26:24.197 EAL: No free 2048 kB hugepages reported on node 1
00:26:24.197 [2024-07-15 15:08:40.092677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:24.197 [2024-07-15 15:08:40.157471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:24.457 Running I/O for 1 seconds...
00:26:25.401
00:26:25.401 Latency(us)
00:26:25.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.401 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:25.401 Verification LBA range: start 0x0 length 0x4000
00:26:25.401 Nvme1n1 : 1.01 14380.29 56.17 0.00 0.00 8845.51 3222.19 19442.35
00:26:25.401 ===================================================================================================================
00:26:25.401 Total : 14380.29 56.17 0.00 0.00 8845.51 3222.19 19442.35
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1979307
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:25.662 {
00:26:25.662 "params": {
00:26:25.662 "name": "Nvme$subsystem",
00:26:25.662 "trtype": "$TEST_TRANSPORT",
00:26:25.662 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:25.662 "adrfam": "ipv4",
00:26:25.662 "trsvcid": "$NVMF_PORT",
00:26:25.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:25.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:25.662 "hdgst": ${hdgst:-false},
00:26:25.662 "ddgst": ${ddgst:-false}
00:26:25.662 },
00:26:25.662 "method": "bdev_nvme_attach_controller"
00:26:25.662 }
00:26:25.662 EOF
00:26:25.662 )")
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
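Note how bdevperf receives its target description in these runs: gen_nvmf_target_json (from the sourced nvmf/common.sh) prints the bdev_nvme_attach_controller JSON shown above, and the harness hands it over an anonymous file descriptor (--json /dev/fd/62, then /dev/fd/63), which is what bash process substitution expands to. An equivalent standalone sketch; reading -f as bdevperf's "continue despite I/O failures" flag is an assumption, though it fits the kill -9 of the target that follows below:

    # 1-second warm-up: queue depth 128, 4 KiB I/O, verify workload.
    build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1
    # 15-second run; -f keeps bdevperf alive while the target is killed mid-run.
    build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f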
00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:25.662 15:08:41 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:25.662 "params": { 00:26:25.662 "name": "Nvme1", 00:26:25.662 "trtype": "rdma", 00:26:25.662 "traddr": "192.168.100.8", 00:26:25.662 "adrfam": "ipv4", 00:26:25.662 "trsvcid": "4420", 00:26:25.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:25.662 "hdgst": false, 00:26:25.662 "ddgst": false 00:26:25.662 }, 00:26:25.662 "method": "bdev_nvme_attach_controller" 00:26:25.662 }' 00:26:25.662 [2024-07-15 15:08:41.548436] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:25.662 [2024-07-15 15:08:41.548491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979307 ] 00:26:25.662 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.662 [2024-07-15 15:08:41.614712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.662 [2024-07-15 15:08:41.677387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.923 Running I/O for 15 seconds... 00:26:28.462 15:08:44 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1978829 00:26:28.462 15:08:44 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:29.845 [2024-07-15 15:08:45.541528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.845 [2024-07-15 15:08:45.541743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.845 [2024-07-15 15:08:45.541752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 
15:08:45.541830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.541978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.541987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:29.846 [2024-07-15 15:08:45.541995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102304 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.846 [2024-07-15 15:08:45.542336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x183f00 00:26:29.846 [2024-07-15 15:08:45.542354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x183f00 00:26:29.846 [2024-07-15 15:08:45.542372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.846 [2024-07-15 15:08:45.542381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 
15:08:45.542470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:101528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 
dnr:0 00:26:29.847 [2024-07-15 15:08:45.542927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.542992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.542999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.847 [2024-07-15 15:08:45.543009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x183f00 00:26:29.847 [2024-07-15 15:08:45.543016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:47 nsid:1 lba:101800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101872 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200007582000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183f00 00:26:29.848 
[2024-07-15 15:08:45.543532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183f00 00:26:29.848 [2024-07-15 15:08:45.543581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.848 [2024-07-15 15:08:45.543590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183f00 00:26:29.849 [2024-07-15 15:08:45.543596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.543606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183f00 00:26:29.849 [2024-07-15 15:08:45.543613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.543622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183f00 00:26:29.849 [2024-07-15 15:08:45.543630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.543639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183f00 00:26:29.849 [2024-07-15 15:08:45.543646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.543655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183f00 00:26:29.849 [2024-07-15 15:08:45.552378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:7ab2a000 sqhd:52b0 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.554609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.849 [2024-07-15 15:08:45.554628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.849 [2024-07-15 
15:08:45.554636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102016 len:8 PRP1 0x0 PRP2 0x0 00:26:29.849 [2024-07-15 15:08:45.554645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.554679] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:26:29.849 [2024-07-15 15:08:45.554711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.849 [2024-07-15 15:08:45.554720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.554729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.849 [2024-07-15 15:08:45.554736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.554743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.849 [2024-07-15 15:08:45.554750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.554758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.849 [2024-07-15 15:08:45.554765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.849 [2024-07-15 15:08:45.574928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:29.849 [2024-07-15 15:08:45.574970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.849 [2024-07-15 15:08:45.574991] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:29.849 [2024-07-15 15:08:45.579177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.849 [2024-07-15 15:08:45.582874] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:29.849 [2024-07-15 15:08:45.582894] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:29.849 [2024-07-15 15:08:45.582900] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:30.791 [2024-07-15 15:08:46.587221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:30.791 [2024-07-15 15:08:46.587286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:30.791 [2024-07-15 15:08:46.587931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:30.791 [2024-07-15 15:08:46.587954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:30.791 [2024-07-15 15:08:46.587976] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:30.791 [2024-07-15 15:08:46.588967] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:30.791 [2024-07-15 15:08:46.591655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:30.791 [2024-07-15 15:08:46.602786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:30.791 [2024-07-15 15:08:46.606470] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:30.791 [2024-07-15 15:08:46.606492] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:30.791 [2024-07-15 15:08:46.606498] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:31.735 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1978829 Killed "${NVMF_APP[@]}" "$@" 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1980526 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1980526 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1980526 ']' 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:31.735 15:08:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:31.735 [2024-07-15 15:08:47.568872] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:26:31.735 [2024-07-15 15:08:47.568921] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.735 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.735 [2024-07-15 15:08:47.611113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:31.735 [2024-07-15 15:08:47.611134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:31.735 [2024-07-15 15:08:47.611355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:31.735 [2024-07-15 15:08:47.611364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:31.735 [2024-07-15 15:08:47.611372] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:31.735 [2024-07-15 15:08:47.613247] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:31.735 [2024-07-15 15:08:47.614891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.735 [2024-07-15 15:08:47.627075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:31.735 [2024-07-15 15:08:47.630655] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:31.735 [2024-07-15 15:08:47.630672] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:31.735 [2024-07-15 15:08:47.630678] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:31.735 [2024-07-15 15:08:47.650800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:31.735 [2024-07-15 15:08:47.704873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.735 [2024-07-15 15:08:47.704907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.735 [2024-07-15 15:08:47.704916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.735 [2024-07-15 15:08:47.704920] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.735 [2024-07-15 15:08:47.704925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
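Note: the Tracepoint Group Mask notices above come from the -e 0xFFFF flag on the nvmf_tgt command line, and the reactor start-up lines that follow reflect its -m 0xE core mask: 0xE = 0b1110, so reactors run on cores 1, 2 and 3, leaving core 0 free, presumably for the host-side bdevperf job, which reports Core Mask 0x1 in the latency summary further down.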
00:26:31.735 [2024-07-15 15:08:47.705031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.735 [2024-07-15 15:08:47.705190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.735 [2024-07-15 15:08:47.705192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.310 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.310 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:32.310 15:08:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:32.310 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:32.310 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.571 [2024-07-15 15:08:48.418407] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf69920/0xf6de10) succeed. 00:26:32.571 [2024-07-15 15:08:48.428299] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf6aec0/0xfaf4a0) succeed. 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.571 Malloc0 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.571 [2024-07-15 15:08:48.559675] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
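The target-side setup traced above boils down to a short RPC sequence. A minimal standalone sketch of the same steps (assuming a freshly started nvmf_tgt on the default /var/tmp/spdk.sock RPC socket; rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, and 192.168.100.8 is the mlx5 test NIC's address):

  ./build/bin/nvmf_tgt -m 0xE &    # simplified from the "-i 0 -e 0xFFFF -m 0xE" invocation above
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the last call returns, the target prints the same "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice seen above and the host-side bdevperf job can attach to nqn.2016-06.io.spdk:cnode1 over RDMA port 4420.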
00:26:32.571 15:08:48 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1979307 00:26:32.832 [2024-07-15 15:08:48.634975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:32.832 [2024-07-15 15:08:48.634998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.832 [2024-07-15 15:08:48.635216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.832 [2024-07-15 15:08:48.635225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.832 [2024-07-15 15:08:48.635242] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:32.832 [2024-07-15 15:08:48.637114] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:32.832 [2024-07-15 15:08:48.638758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.832 [2024-07-15 15:08:48.650923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.832 [2024-07-15 15:08:48.709987] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:40.974 00:26:40.974 Latency(us) 00:26:40.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.974 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:40.974 Verification LBA range: start 0x0 length 0x4000 00:26:40.974 Nvme1n1 : 15.01 12154.79 47.48 7922.93 0.00 6348.86 344.75 1069547.52 00:26:40.974 =================================================================================================================== 00:26:40.974 Total : 12154.79 47.48 7922.93 0.00 6348.86 344.75 1069547.52 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:41.236 rmmod nvme_rdma 00:26:41.236 rmmod nvme_fabrics 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf 
-- nvmf/common.sh@125 -- # return 0 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1980526 ']' 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1980526 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1980526 ']' 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1980526 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1980526 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1980526' 00:26:41.236 killing process with pid 1980526 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1980526 00:26:41.236 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1980526 00:26:41.498 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:41.498 15:08:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:41.498 00:26:41.498 real 0m26.760s 00:26:41.498 user 1m4.402s 00:26:41.498 sys 0m7.051s 00:26:41.498 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:41.498 15:08:57 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:41.498 ************************************ 00:26:41.498 END TEST nvmf_bdevperf 00:26:41.498 ************************************ 00:26:41.498 15:08:57 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:26:41.498 15:08:57 nvmf_rdma -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:26:41.498 15:08:57 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:41.498 15:08:57 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.498 15:08:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:41.498 ************************************ 00:26:41.498 START TEST nvmf_target_disconnect 00:26:41.498 ************************************ 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:26:41.498 * Looking for test storage... 
00:26:41.498 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.498 15:08:57 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.499 15:08:57 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:26:49.652 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:26:49.652 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:26:49.652 Found net devices under 0000:98:00.0: mlx_0_0 00:26:49.652 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:26:49.653 Found net devices under 0000:98:00.1: mlx_0_1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:49.653 15:09:05 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:49.653 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:49.653 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:26:49.653 altname enp152s0f0np0 00:26:49.653 altname ens817f0np0 00:26:49.653 inet 192.168.100.8/24 scope global mlx_0_0 00:26:49.653 valid_lft forever preferred_lft forever 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:49.653 15:09:05 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:49.653 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:49.653 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:26:49.653 altname enp152s0f1np1 00:26:49.653 altname ens817f1np1 00:26:49.653 inet 192.168.100.9/24 scope global mlx_0_1 00:26:49.653 valid_lft forever preferred_lft forever 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:49.653 15:09:05 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:49.653 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:49.654 192.168.100.9' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:49.654 192.168.100.9' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:49.654 192.168.100.9' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:49.654 ************************************ 00:26:49.654 START TEST nvmf_target_disconnect_tc1 00:26:49.654 ************************************ 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:26:49.654 15:09:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:49.654 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.654 [2024-07-15 15:09:05.505964] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:49.654 [2024-07-15 15:09:05.506005] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:49.654 [2024-07-15 15:09:05.506014] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:26:50.598 [2024-07-15 15:09:06.510466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:50.598 [2024-07-15 15:09:06.510488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
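The RDMA_CM_EVENT_REJECTED and "RDMA connect error -74" messages above, and the probe failure reported just below, are the expected outcome of test case 1: the bundled reconnect example is pointed at 192.168.100.8:4420 before any subsystem or listener exists, and the NOT wrapper treats the resulting non-zero exit as a pass. A condensed, hedged sketch of that expectation (wrapper name illustrative, $rootdir standing in for the workspace path):

    # tc1 expectation: the connect attempt must fail because nothing is listening yet.
    expect_connect_failure() {
        if "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
            -r "trtype:rdma adrfam:IPv4 traddr:$NVMF_FIRST_TARGET_IP trsvcid:$NVMF_PORT"; then
            echo "reconnect unexpectedly succeeded" >&2
            return 1
        fi
        return 0   # failure to connect is the desired result for tc1
    }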
00:26:50.598 [2024-07-15 15:09:06.510497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:26:50.598 [2024-07-15 15:09:06.510522] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:50.598 [2024-07-15 15:09:06.510530] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:50.598 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:26:50.598 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:50.598 Initializing NVMe Controllers 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:50.598 00:26:50.598 real 0m1.150s 00:26:50.598 user 0m0.990s 00:26:50.598 sys 0m0.140s 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:50.598 ************************************ 00:26:50.598 END TEST nvmf_target_disconnect_tc1 00:26:50.598 ************************************ 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:50.598 ************************************ 00:26:50.598 START TEST nvmf_target_disconnect_tc2 00:26:50.598 ************************************ 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:26:50.598 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1986927 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1986927 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1986927 ']' 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.599 15:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.860 [2024-07-15 15:09:06.667760] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:50.860 [2024-07-15 15:09:06.667817] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.860 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.860 [2024-07-15 15:09:06.756854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.860 [2024-07-15 15:09:06.850794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.860 [2024-07-15 15:09:06.850851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.860 [2024-07-15 15:09:06.850860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.860 [2024-07-15 15:09:06.850867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.860 [2024-07-15 15:09:06.850873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
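nvmfappstart above launches build/bin/nvmf_tgt with -i 0 (shared-memory id), -e 0xFFFF (tracepoint group mask, confirmed by the notice above) and -m 0xF0 (core mask), then waitforlisten blocks until the RPC socket answers. A rough equivalent of that start-and-wait step, assuming the default /var/tmp/spdk.sock socket; the polling loop is a simplification of the real waitforlisten:

    # Start the target on cores 4-7 with all tracepoint groups enabled.
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # The target counts as up once the RPC socket accepts a request.
        if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break
        fi
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done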
00:26:50.860 [2024-07-15 15:09:06.851061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:50.860 [2024-07-15 15:09:06.851189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:50.860 [2024-07-15 15:09:06.851319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:50.860 [2024-07-15 15:09:06.851510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:51.432 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.432 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:51.432 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:51.432 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:51.432 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.691 Malloc0 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.691 [2024-07-15 15:09:07.568396] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a15550/0x1a210b0) succeed. 00:26:51.691 [2024-07-15 15:09:07.582638] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a16b90/0x1a62740) succeed. 
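With the app running, the trace above builds the storage side over RPC: a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from target_disconnect.sh) and an RDMA transport with 1024 shared buffers; the two create_ib_device notices are the target claiming mlx5_0 and mlx5_1. The same two steps expressed directly with scripts/rpc.py, assuming the default RPC socket that rpc_cmd uses:

    # 64 MiB ramdisk-backed bdev, 512-byte blocks, named Malloc0
    "$rootdir/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    # RDMA transport for the NVMe-oF target, matching the harness's shared-buffer count
    "$rootdir/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024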
00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.691 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.961 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.961 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:51.961 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.961 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.961 [2024-07-15 15:09:07.765298] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:51.961 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.961 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:51.961 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.961 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.962 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.962 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1987167 00:26:51.962 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:51.962 15:09:07 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:51.962 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.919 15:09:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1986927 00:26:53.919 15:09:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:55.304 
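The rest of the setup above exposes Malloc0 over the fabric (subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001, Malloc0 attached as a namespace, RDMA listener on 192.168.100.8:4420), starts the reconnect workload in the background, and then kills the first target instance (pid 1986927) with SIGKILL roughly two seconds in. The burst of "completed with error (sct=0, sc=8)" lines that follows is the host aborting its in-flight I/O when the qpair drops. A hedged outline of that choreography, with the rpc.py calls mirroring the rpc_cmd trace and the variable names illustrative:

    "$rootdir/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Run the I/O + reconnect workload in the background, then yank the target away.
    "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420" &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"   # forced target death exercises the host's disconnect/reconnect path
    sleep 2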
Read completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Read completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Write completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Write completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Read completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Write completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Write completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Read completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Read completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Write completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Read completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Write completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Read completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Write completed with error (sct=0, sc=8) 00:26:55.304 starting I/O failed 00:26:55.304 Read completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Read completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Read completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Read completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Read completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 Write completed with error (sct=0, sc=8) 00:26:55.305 starting I/O failed 00:26:55.305 [2024-07-15 15:09:10.980968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:55.875 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1986927 Killed "${NVMF_APP[@]}" "$@" 00:26:55.875 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.876 15:09:11 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1988382 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1988382 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1988382 ']' 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:55.876 15:09:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:55.876 [2024-07-15 15:09:11.846677] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:26:55.876 [2024-07-15 15:09:11.846730] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.876 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.876 [2024-07-15 15:09:11.930257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:56.137 Read completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Read completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Read completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Read completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Read completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Read completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Read completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 [2024-07-15 15:09:11.984284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.137 starting I/O failed 00:26:56.137 Write completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.137 Read completed with error (sct=0, sc=8) 00:26:56.137 starting I/O failed 00:26:56.138 Write completed with error (sct=0, sc=8) 00:26:56.138 [2024-07-15 15:09:11.984313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of eventsstarting I/O failed 00:26:56.138 at runtime. 00:26:56.138 Write completed with error (sct=0, sc=8) 00:26:56.138 [2024-07-15 15:09:11.984320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is thestarting I/O failed 00:26:56.138 only 00:26:56.138 Write completed with error (sct=0, sc=8) 00:26:56.138 [2024-07-15 15:09:11.984326] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.138 starting I/O failed 00:26:56.138 Read completed with error (sct=0, sc=8) 00:26:56.138 [2024-07-15 15:09:11.984330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:56.138 starting I/O failed 00:26:56.138 Write completed with error (sct=0, sc=8) 00:26:56.138 starting I/O failed 00:26:56.138 Write completed with error (sct=0, sc=8) 00:26:56.138 starting I/O failed 00:26:56.138 Read completed with error (sct=0, sc=8) 00:26:56.138 starting I/O failed 00:26:56.138 Read completed with error (sct=0, sc=8) 00:26:56.138 starting I/O failed 00:26:56.138 Read completed with error (sct=0, sc=8) 00:26:56.138 starting I/O failed 00:26:56.138 Write completed with error (sct=0, sc=8) 00:26:56.138 starting I/O failed 00:26:56.138 Read completed with error (sct=0, sc=8) 00:26:56.138 starting I/O failed 00:26:56.138 Write completed with error (sct=0, sc=8) 00:26:56.138 starting I/O failed 00:26:56.138 [2024-07-15 15:09:11.984478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:56.138 [2024-07-15 15:09:11.984730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:56.138 [2024-07-15 15:09:11.984847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:56.138 [2024-07-15 15:09:11.984848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:56.138 [2024-07-15 15:09:11.986686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.138 [2024-07-15 15:09:11.989170] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:56.138 [2024-07-15 15:09:11.989182] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:56.138 [2024-07-15 15:09:11.989187] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.709 Malloc0 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:56.709 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.709 
15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.709 [2024-07-15 15:09:12.716848] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23f5550/0x24010b0) succeed. 00:26:56.709 [2024-07-15 15:09:12.727103] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23f6b90/0x2442740) succeed. 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.970 [2024-07-15 15:09:12.862698] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.970 15:09:12 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1987167 00:26:56.970 [2024-07-15 15:09:12.993572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.970 qpair failed and we were unable to recover it. 
00:26:56.970 [2024-07-15 15:09:13.002998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.970 [2024-07-15 15:09:13.003040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.970 [2024-07-15 15:09:13.003053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.970 [2024-07-15 15:09:13.003058] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.971 [2024-07-15 15:09:13.003067] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:56.971 [2024-07-15 15:09:13.012455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 15:09:13.023081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.971 [2024-07-15 15:09:13.023110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.971 [2024-07-15 15:09:13.023120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.971 [2024-07-15 15:09:13.023125] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.971 [2024-07-15 15:09:13.023130] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:56.971 [2024-07-15 15:09:13.032350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.971 qpair failed and we were unable to recover it. 00:26:57.233 [2024-07-15 15:09:13.042975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.233 [2024-07-15 15:09:13.043007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.233 [2024-07-15 15:09:13.043018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.233 [2024-07-15 15:09:13.043023] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.233 [2024-07-15 15:09:13.043027] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.233 [2024-07-15 15:09:13.052516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.233 qpair failed and we were unable to recover it. 
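From this point the log repeats one failure signature: each reconnect attempt reaches the restarted target, which no longer knows controller ID 0x1, so the Fabrics CONNECT for the I/O qpair completes with sct 1, sc 130 (0x82, which SPDK reports for invalid connect parameters) and the qpair cannot be recovered. When reading a transcript like this, the repeats can be summarized with a couple of greps; the log filename below is illustrative and this is only a reading aid, not part of the test:

    # How many qpairs gave up, and which status codes appeared.
    grep -c 'qpair failed and we were unable to recover it' build.log
    grep -o 'sct [0-9]*, sc [0-9]*' build.log | sort | uniq -c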
00:26:57.233 [2024-07-15 15:09:13.062818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.233 [2024-07-15 15:09:13.062848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.233 [2024-07-15 15:09:13.062859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.233 [2024-07-15 15:09:13.062863] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.233 [2024-07-15 15:09:13.062868] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.233 [2024-07-15 15:09:13.072558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.233 qpair failed and we were unable to recover it. 00:26:57.233 [2024-07-15 15:09:13.082869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.233 [2024-07-15 15:09:13.082902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.233 [2024-07-15 15:09:13.082912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.233 [2024-07-15 15:09:13.082917] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.233 [2024-07-15 15:09:13.082921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.233 [2024-07-15 15:09:13.092627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.233 qpair failed and we were unable to recover it. 00:26:57.233 [2024-07-15 15:09:13.103239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.233 [2024-07-15 15:09:13.103269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.233 [2024-07-15 15:09:13.103279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.233 [2024-07-15 15:09:13.103284] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.233 [2024-07-15 15:09:13.103289] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.233 [2024-07-15 15:09:13.112707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.233 qpair failed and we were unable to recover it. 
00:26:57.233 [2024-07-15 15:09:13.123256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.233 [2024-07-15 15:09:13.123286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.233 [2024-07-15 15:09:13.123296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.233 [2024-07-15 15:09:13.123300] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.233 [2024-07-15 15:09:13.123305] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.233 [2024-07-15 15:09:13.132682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.233 qpair failed and we were unable to recover it. 00:26:57.233 [2024-07-15 15:09:13.143038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.233 [2024-07-15 15:09:13.143067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.233 [2024-07-15 15:09:13.143077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.233 [2024-07-15 15:09:13.143082] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.233 [2024-07-15 15:09:13.143087] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.234 [2024-07-15 15:09:13.152887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.234 qpair failed and we were unable to recover it. 00:26:57.234 [2024-07-15 15:09:13.163271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.234 [2024-07-15 15:09:13.163302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.234 [2024-07-15 15:09:13.163321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.234 [2024-07-15 15:09:13.163327] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.234 [2024-07-15 15:09:13.163332] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.234 [2024-07-15 15:09:13.173062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.234 qpair failed and we were unable to recover it. 
00:26:57.234 [2024-07-15 15:09:13.183622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.234 [2024-07-15 15:09:13.183648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.234 [2024-07-15 15:09:13.183659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.234 [2024-07-15 15:09:13.183667] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.234 [2024-07-15 15:09:13.183672] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.234 [2024-07-15 15:09:13.192843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.234 qpair failed and we were unable to recover it. 00:26:57.234 [2024-07-15 15:09:13.203598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.234 [2024-07-15 15:09:13.203634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.234 [2024-07-15 15:09:13.203644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.234 [2024-07-15 15:09:13.203650] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.234 [2024-07-15 15:09:13.203654] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.234 [2024-07-15 15:09:13.213120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.234 qpair failed and we were unable to recover it. 00:26:57.234 [2024-07-15 15:09:13.223421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.234 [2024-07-15 15:09:13.223451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.234 [2024-07-15 15:09:13.223470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.234 [2024-07-15 15:09:13.223476] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.234 [2024-07-15 15:09:13.223481] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.234 [2024-07-15 15:09:13.233042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.234 qpair failed and we were unable to recover it. 
00:26:57.234 [2024-07-15 15:09:13.243582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.234 [2024-07-15 15:09:13.243615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.234 [2024-07-15 15:09:13.243625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.234 [2024-07-15 15:09:13.243630] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.234 [2024-07-15 15:09:13.243635] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.234 [2024-07-15 15:09:13.253126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.234 qpair failed and we were unable to recover it. 00:26:57.234 [2024-07-15 15:09:13.263806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.234 [2024-07-15 15:09:13.263841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.234 [2024-07-15 15:09:13.263860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.234 [2024-07-15 15:09:13.263866] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.234 [2024-07-15 15:09:13.263871] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.234 [2024-07-15 15:09:13.273403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.234 qpair failed and we were unable to recover it. 00:26:57.234 [2024-07-15 15:09:13.283852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.234 [2024-07-15 15:09:13.283884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.234 [2024-07-15 15:09:13.283895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.234 [2024-07-15 15:09:13.283900] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.234 [2024-07-15 15:09:13.283904] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.234 [2024-07-15 15:09:13.293114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.234 qpair failed and we were unable to recover it. 
00:26:57.496 [2024-07-15 15:09:13.303655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.303682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.303691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.303696] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.303701] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.313125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 00:26:57.496 [2024-07-15 15:09:13.323921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.323953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.323962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.323967] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.323971] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.333322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 00:26:57.496 [2024-07-15 15:09:13.344042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.344069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.344079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.344085] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.344091] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.353355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 
00:26:57.496 [2024-07-15 15:09:13.364101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.364132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.364147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.364152] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.364156] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.373770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 00:26:57.496 [2024-07-15 15:09:13.383979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.384007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.384016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.384021] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.384026] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.393575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 00:26:57.496 [2024-07-15 15:09:13.404274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.404304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.404324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.404329] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.404335] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.413735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 
00:26:57.496 [2024-07-15 15:09:13.424166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.424193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.424203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.424208] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.424213] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.433608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 00:26:57.496 [2024-07-15 15:09:13.443748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.443782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.443801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.443808] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.443816] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.453744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 00:26:57.496 [2024-07-15 15:09:13.464009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.464039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.464050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.464055] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.464059] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.473790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 
00:26:57.496 [2024-07-15 15:09:13.484595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.484628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.484637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.484642] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.496 [2024-07-15 15:09:13.484646] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.496 [2024-07-15 15:09:13.493901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.496 qpair failed and we were unable to recover it. 00:26:57.496 [2024-07-15 15:09:13.504464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.496 [2024-07-15 15:09:13.504495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.496 [2024-07-15 15:09:13.504504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.496 [2024-07-15 15:09:13.504509] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.497 [2024-07-15 15:09:13.504513] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.497 [2024-07-15 15:09:13.513697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.497 qpair failed and we were unable to recover it. 00:26:57.497 [2024-07-15 15:09:13.523969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.497 [2024-07-15 15:09:13.524004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.497 [2024-07-15 15:09:13.524014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.497 [2024-07-15 15:09:13.524018] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.497 [2024-07-15 15:09:13.524023] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.497 [2024-07-15 15:09:13.533907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.497 qpair failed and we were unable to recover it. 
00:26:57.497 [2024-07-15 15:09:13.544205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.497 [2024-07-15 15:09:13.544235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.497 [2024-07-15 15:09:13.544245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.497 [2024-07-15 15:09:13.544250] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.497 [2024-07-15 15:09:13.544254] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.497 [2024-07-15 15:09:13.553746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.497 qpair failed and we were unable to recover it. 00:26:57.758 [2024-07-15 15:09:13.564818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.758 [2024-07-15 15:09:13.564849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.758 [2024-07-15 15:09:13.564858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.758 [2024-07-15 15:09:13.564862] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.758 [2024-07-15 15:09:13.564867] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.758 [2024-07-15 15:09:13.574276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.758 qpair failed and we were unable to recover it. 00:26:57.758 [2024-07-15 15:09:13.584812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.758 [2024-07-15 15:09:13.584838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.758 [2024-07-15 15:09:13.584847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.758 [2024-07-15 15:09:13.584852] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.758 [2024-07-15 15:09:13.584856] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.758 [2024-07-15 15:09:13.594034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.758 qpair failed and we were unable to recover it. 
00:26:57.758 [2024-07-15 15:09:13.604856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.758 [2024-07-15 15:09:13.604888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.758 [2024-07-15 15:09:13.604897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.758 [2024-07-15 15:09:13.604902] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.758 [2024-07-15 15:09:13.604906] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.758 [2024-07-15 15:09:13.614066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.758 qpair failed and we were unable to recover it. 00:26:57.758 [2024-07-15 15:09:13.624650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.758 [2024-07-15 15:09:13.624678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.758 [2024-07-15 15:09:13.624687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.758 [2024-07-15 15:09:13.624695] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.758 [2024-07-15 15:09:13.624699] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.758 [2024-07-15 15:09:13.634276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.758 qpair failed and we were unable to recover it. 00:26:57.758 [2024-07-15 15:09:13.644893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.758 [2024-07-15 15:09:13.644925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.758 [2024-07-15 15:09:13.644934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.758 [2024-07-15 15:09:13.644939] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.758 [2024-07-15 15:09:13.644943] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.758 [2024-07-15 15:09:13.654292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.758 qpair failed and we were unable to recover it. 
00:26:57.758 [2024-07-15 15:09:13.665240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.758 [2024-07-15 15:09:13.665276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.758 [2024-07-15 15:09:13.665285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.758 [2024-07-15 15:09:13.665289] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.758 [2024-07-15 15:09:13.665294] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.758 [2024-07-15 15:09:13.673909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.758 qpair failed and we were unable to recover it. 00:26:57.758 [2024-07-15 15:09:13.684697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.758 [2024-07-15 15:09:13.684729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.758 [2024-07-15 15:09:13.684738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.758 [2024-07-15 15:09:13.684743] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.758 [2024-07-15 15:09:13.684747] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.758 [2024-07-15 15:09:13.694445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.758 qpair failed and we were unable to recover it. 00:26:57.758 [2024-07-15 15:09:13.704924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.758 [2024-07-15 15:09:13.704950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.758 [2024-07-15 15:09:13.704959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.758 [2024-07-15 15:09:13.704964] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.758 [2024-07-15 15:09:13.704968] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.758 [2024-07-15 15:09:13.714466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.758 qpair failed and we were unable to recover it. 
00:26:57.758 [2024-07-15 15:09:13.725168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.758 [2024-07-15 15:09:13.725197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.758 [2024-07-15 15:09:13.725206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.758 [2024-07-15 15:09:13.725210] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.758 [2024-07-15 15:09:13.725215] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.758 [2024-07-15 15:09:13.734287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.758 qpair failed and we were unable to recover it. 00:26:57.758 [2024-07-15 15:09:13.745207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.759 [2024-07-15 15:09:13.745242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.759 [2024-07-15 15:09:13.745251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.759 [2024-07-15 15:09:13.745256] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.759 [2024-07-15 15:09:13.745260] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.759 [2024-07-15 15:09:13.754728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.759 qpair failed and we were unable to recover it. 00:26:57.759 [2024-07-15 15:09:13.765104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.759 [2024-07-15 15:09:13.765135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.759 [2024-07-15 15:09:13.765144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.759 [2024-07-15 15:09:13.765149] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.759 [2024-07-15 15:09:13.765153] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.759 [2024-07-15 15:09:13.774867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.759 qpair failed and we were unable to recover it. 
00:26:57.759 [2024-07-15 15:09:13.784899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.759 [2024-07-15 15:09:13.784926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.759 [2024-07-15 15:09:13.784936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.759 [2024-07-15 15:09:13.784940] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.759 [2024-07-15 15:09:13.784944] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.759 [2024-07-15 15:09:13.794846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.759 qpair failed and we were unable to recover it. 00:26:57.759 [2024-07-15 15:09:13.805499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.759 [2024-07-15 15:09:13.805525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.759 [2024-07-15 15:09:13.805537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.759 [2024-07-15 15:09:13.805541] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.759 [2024-07-15 15:09:13.805545] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:57.759 [2024-07-15 15:09:13.814779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:57.759 qpair failed and we were unable to recover it. 00:26:58.020 [2024-07-15 15:09:13.825701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.020 [2024-07-15 15:09:13.825730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.020 [2024-07-15 15:09:13.825740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.020 [2024-07-15 15:09:13.825744] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.020 [2024-07-15 15:09:13.825749] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:13.834821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 
00:26:58.021 [2024-07-15 15:09:13.845461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:13.845488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:13.845497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:13.845502] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:13.845507] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:13.854997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 00:26:58.021 [2024-07-15 15:09:13.865384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:13.865412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:13.865421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:13.865427] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:13.865431] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:13.874986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 00:26:58.021 [2024-07-15 15:09:13.885799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:13.885835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:13.885854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:13.885860] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:13.885869] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:13.895077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 
00:26:58.021 [2024-07-15 15:09:13.905727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:13.905765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:13.905775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:13.905780] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:13.905785] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:13.915160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 00:26:58.021 [2024-07-15 15:09:13.925840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:13.925869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:13.925879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:13.925883] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:13.925888] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:13.935002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 00:26:58.021 [2024-07-15 15:09:13.944766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:13.944792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:13.944802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:13.944807] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:13.944812] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:13.954895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 
00:26:58.021 [2024-07-15 15:09:13.965923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:13.965956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:13.965965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:13.965970] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:13.965974] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:13.975243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 00:26:58.021 [2024-07-15 15:09:13.985892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:13.985924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:13.985933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:13.985938] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:13.985942] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:13.995257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 00:26:58.021 [2024-07-15 15:09:14.005937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:14.005963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:14.005973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:14.005977] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:14.005981] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:14.015446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 
00:26:58.021 [2024-07-15 15:09:14.025764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:14.025791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:14.025800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:14.025805] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:14.025809] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:14.035327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 00:26:58.021 [2024-07-15 15:09:14.046107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.021 [2024-07-15 15:09:14.046138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.021 [2024-07-15 15:09:14.046151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.021 [2024-07-15 15:09:14.046156] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.021 [2024-07-15 15:09:14.046160] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.021 [2024-07-15 15:09:14.055927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.021 qpair failed and we were unable to recover it. 00:26:58.022 [2024-07-15 15:09:14.066242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.022 [2024-07-15 15:09:14.066276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.022 [2024-07-15 15:09:14.066285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.022 [2024-07-15 15:09:14.066292] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.022 [2024-07-15 15:09:14.066297] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.022 [2024-07-15 15:09:14.075443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.022 qpair failed and we were unable to recover it. 
00:26:58.282 [2024-07-15 15:09:14.086503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.282 [2024-07-15 15:09:14.086531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.282 [2024-07-15 15:09:14.086540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.282 [2024-07-15 15:09:14.086545] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.282 [2024-07-15 15:09:14.086549] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.282 [2024-07-15 15:09:14.095698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.282 qpair failed and we were unable to recover it. 00:26:58.282 [2024-07-15 15:09:14.105925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.282 [2024-07-15 15:09:14.105951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.282 [2024-07-15 15:09:14.105961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.282 [2024-07-15 15:09:14.105966] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.282 [2024-07-15 15:09:14.105971] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.282 [2024-07-15 15:09:14.115770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.282 qpair failed and we were unable to recover it. 00:26:58.282 [2024-07-15 15:09:14.126245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.282 [2024-07-15 15:09:14.126276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.282 [2024-07-15 15:09:14.126285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.282 [2024-07-15 15:09:14.126290] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.282 [2024-07-15 15:09:14.126294] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.135813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 
00:26:58.283 [2024-07-15 15:09:14.146442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.146468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.146477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.146482] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.146487] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.155845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 00:26:58.283 [2024-07-15 15:09:14.166686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.166716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.166735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.166741] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.166746] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.175965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 00:26:58.283 [2024-07-15 15:09:14.186100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.186128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.186138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.186143] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.186148] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.195876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 
00:26:58.283 [2024-07-15 15:09:14.206860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.206896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.206915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.206921] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.206925] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.216172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 00:26:58.283 [2024-07-15 15:09:14.226627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.226661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.226671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.226676] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.226680] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.235885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 00:26:58.283 [2024-07-15 15:09:14.246589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.246621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.246634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.246639] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.246643] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.255998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 
00:26:58.283 [2024-07-15 15:09:14.266426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.266453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.266463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.266468] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.266472] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.276067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 00:26:58.283 [2024-07-15 15:09:14.286821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.286851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.286861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.286865] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.286870] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.296152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 00:26:58.283 [2024-07-15 15:09:14.306613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.306640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.306649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.306654] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.306658] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.316279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 
00:26:58.283 [2024-07-15 15:09:14.326933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.283 [2024-07-15 15:09:14.326965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.283 [2024-07-15 15:09:14.326974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.283 [2024-07-15 15:09:14.326979] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.283 [2024-07-15 15:09:14.326986] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.283 [2024-07-15 15:09:14.336273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.283 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 15:09:14.346458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.544 [2024-07-15 15:09:14.346487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.544 [2024-07-15 15:09:14.346497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.544 [2024-07-15 15:09:14.346502] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.544 [2024-07-15 15:09:14.346506] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.544 [2024-07-15 15:09:14.356384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 15:09:14.367246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.544 [2024-07-15 15:09:14.367279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.544 [2024-07-15 15:09:14.367299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.544 [2024-07-15 15:09:14.367305] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.544 [2024-07-15 15:09:14.367310] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.544 [2024-07-15 15:09:14.376553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.544 qpair failed and we were unable to recover it. 
00:26:58.544 [2024-07-15 15:09:14.386787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.544 [2024-07-15 15:09:14.386815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.386825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.386830] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.386835] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.396457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 15:09:14.407316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.407349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.407368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.407374] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.407379] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.416675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 15:09:14.426936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.426963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.426983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.426989] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.426993] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.436550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 
00:26:58.545 [2024-07-15 15:09:14.446713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.446739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.446749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.446754] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.446759] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.456654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 15:09:14.467328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.467356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.467366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.467370] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.467375] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.476829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 15:09:14.487355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.487385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.487394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.487399] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.487403] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.496848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 
00:26:58.545 [2024-07-15 15:09:14.507039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.507065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.507074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.507082] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.507086] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.516931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 15:09:14.527570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.527604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.527624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.527629] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.527634] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.536979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 15:09:14.547645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.547677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.547687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.547692] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.547697] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.557119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 
00:26:58.545 [2024-07-15 15:09:14.567489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.567517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.567527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.567532] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.567536] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.577071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 15:09:14.587273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.545 [2024-07-15 15:09:14.587298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.545 [2024-07-15 15:09:14.587308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.545 [2024-07-15 15:09:14.587312] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.545 [2024-07-15 15:09:14.587317] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.545 [2024-07-15 15:09:14.596905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.805 [2024-07-15 15:09:14.607582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.805 [2024-07-15 15:09:14.607613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.805 [2024-07-15 15:09:14.607623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.607628] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.607632] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.617299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 
00:26:58.806 [2024-07-15 15:09:14.627771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.627804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.627813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.627818] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.627822] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.637312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 00:26:58.806 [2024-07-15 15:09:14.647791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.647816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.647825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.647830] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.647835] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.657384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 00:26:58.806 [2024-07-15 15:09:14.667327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.667356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.667365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.667370] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.667374] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.677108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 
00:26:58.806 [2024-07-15 15:09:14.687741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.687770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.687785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.687790] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.687794] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.697657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 00:26:58.806 [2024-07-15 15:09:14.707875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.707902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.707912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.707917] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.707921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.717305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 00:26:58.806 [2024-07-15 15:09:14.727765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.727793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.727803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.727808] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.727812] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.737162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 
00:26:58.806 [2024-07-15 15:09:14.747546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.747572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.747582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.747586] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.747590] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.757243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 00:26:58.806 [2024-07-15 15:09:14.767623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.767654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.767663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.767668] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.767676] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.777373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 00:26:58.806 [2024-07-15 15:09:14.788040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.788073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.788093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.788098] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.788103] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.797386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 
00:26:58.806 [2024-07-15 15:09:14.808028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.808056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.808066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.808071] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.808076] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.817614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 00:26:58.806 [2024-07-15 15:09:14.827869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.827899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.827908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.827913] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.827918] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.837562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 00:26:58.806 [2024-07-15 15:09:14.848226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.806 [2024-07-15 15:09:14.848267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.806 [2024-07-15 15:09:14.848287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.806 [2024-07-15 15:09:14.848293] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.806 [2024-07-15 15:09:14.848297] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.806 [2024-07-15 15:09:14.857815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.806 qpair failed and we were unable to recover it. 
00:26:59.067 [2024-07-15 15:09:14.868118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.067 [2024-07-15 15:09:14.868149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.067 [2024-07-15 15:09:14.868160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.067 [2024-07-15 15:09:14.868165] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.067 [2024-07-15 15:09:14.868169] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.067 [2024-07-15 15:09:14.877530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.067 qpair failed and we were unable to recover it. 00:26:59.067 [2024-07-15 15:09:14.888187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.067 [2024-07-15 15:09:14.888216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.067 [2024-07-15 15:09:14.888226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.067 [2024-07-15 15:09:14.888234] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.067 [2024-07-15 15:09:14.888238] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.067 [2024-07-15 15:09:14.897744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.067 qpair failed and we were unable to recover it. 00:26:59.067 [2024-07-15 15:09:14.908021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.067 [2024-07-15 15:09:14.908049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.067 [2024-07-15 15:09:14.908058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.067 [2024-07-15 15:09:14.908063] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.067 [2024-07-15 15:09:14.908067] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.067 [2024-07-15 15:09:14.917814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.067 qpair failed and we were unable to recover it. 
00:26:59.067 [2024-07-15 15:09:14.928491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.067 [2024-07-15 15:09:14.928524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.067 [2024-07-15 15:09:14.928543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.067 [2024-07-15 15:09:14.928549] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.067 [2024-07-15 15:09:14.928554] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.067 [2024-07-15 15:09:14.938021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.067 qpair failed and we were unable to recover it. 00:26:59.067 [2024-07-15 15:09:14.948487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.067 [2024-07-15 15:09:14.948520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.067 [2024-07-15 15:09:14.948540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.067 [2024-07-15 15:09:14.948549] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.067 [2024-07-15 15:09:14.948554] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.067 [2024-07-15 15:09:14.957965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.067 qpair failed and we were unable to recover it. 00:26:59.067 [2024-07-15 15:09:14.968528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.067 [2024-07-15 15:09:14.968556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.067 [2024-07-15 15:09:14.968576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.067 [2024-07-15 15:09:14.968581] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.067 [2024-07-15 15:09:14.968586] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.067 [2024-07-15 15:09:14.978049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.067 qpair failed and we were unable to recover it. 
00:26:59.067 [2024-07-15 15:09:14.988183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.067 [2024-07-15 15:09:14.988211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.067 [2024-07-15 15:09:14.988221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.068 [2024-07-15 15:09:14.988226] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.068 [2024-07-15 15:09:14.988240] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.068 [2024-07-15 15:09:14.998149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.068 qpair failed and we were unable to recover it. 00:26:59.068 [2024-07-15 15:09:15.007966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.068 [2024-07-15 15:09:15.007995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.068 [2024-07-15 15:09:15.008004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.068 [2024-07-15 15:09:15.008009] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.068 [2024-07-15 15:09:15.008013] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.068 [2024-07-15 15:09:15.018061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.068 qpair failed and we were unable to recover it. 00:26:59.068 [2024-07-15 15:09:15.028586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.068 [2024-07-15 15:09:15.028613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.068 [2024-07-15 15:09:15.028622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.068 [2024-07-15 15:09:15.028627] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.068 [2024-07-15 15:09:15.028631] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.068 [2024-07-15 15:09:15.038073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.068 qpair failed and we were unable to recover it. 
00:26:59.068 [2024-07-15 15:09:15.048759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.068 [2024-07-15 15:09:15.048790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.068 [2024-07-15 15:09:15.048800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.068 [2024-07-15 15:09:15.048804] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.068 [2024-07-15 15:09:15.048809] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.068 [2024-07-15 15:09:15.058190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.068 qpair failed and we were unable to recover it. 00:26:59.068 [2024-07-15 15:09:15.068614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.068 [2024-07-15 15:09:15.068643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.068 [2024-07-15 15:09:15.068652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.068 [2024-07-15 15:09:15.068657] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.068 [2024-07-15 15:09:15.068662] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.068 [2024-07-15 15:09:15.078146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.068 qpair failed and we were unable to recover it. 00:26:59.068 [2024-07-15 15:09:15.088703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.068 [2024-07-15 15:09:15.088732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.068 [2024-07-15 15:09:15.088742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.068 [2024-07-15 15:09:15.088746] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.068 [2024-07-15 15:09:15.088751] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.068 [2024-07-15 15:09:15.098312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.068 qpair failed and we were unable to recover it. 
00:26:59.068 [2024-07-15 15:09:15.108976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.068 [2024-07-15 15:09:15.109003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.068 [2024-07-15 15:09:15.109013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.068 [2024-07-15 15:09:15.109018] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.068 [2024-07-15 15:09:15.109022] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.068 [2024-07-15 15:09:15.118423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.068 qpair failed and we were unable to recover it. 00:26:59.068 [2024-07-15 15:09:15.128832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.068 [2024-07-15 15:09:15.128863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.068 [2024-07-15 15:09:15.128874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.068 [2024-07-15 15:09:15.128880] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.329 [2024-07-15 15:09:15.128885] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.329 [2024-07-15 15:09:15.138335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.329 qpair failed and we were unable to recover it. 00:26:59.329 [2024-07-15 15:09:15.148617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.329 [2024-07-15 15:09:15.148643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.329 [2024-07-15 15:09:15.148653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.329 [2024-07-15 15:09:15.148657] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.329 [2024-07-15 15:09:15.148662] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.329 [2024-07-15 15:09:15.158462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.329 qpair failed and we were unable to recover it. 
00:26:59.329 [2024-07-15 15:09:15.169119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.329 [2024-07-15 15:09:15.169150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.329 [2024-07-15 15:09:15.169159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.329 [2024-07-15 15:09:15.169164] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.329 [2024-07-15 15:09:15.169168] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.329 [2024-07-15 15:09:15.178569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.329 qpair failed and we were unable to recover it. 00:26:59.330 [2024-07-15 15:09:15.190902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.190934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.190954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.190959] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.190964] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.198489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 00:26:59.330 [2024-07-15 15:09:15.209124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.209156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.209166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.209171] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.209179] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.218643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 
00:26:59.330 [2024-07-15 15:09:15.229043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.229075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.229095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.229100] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.229105] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.238590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 00:26:59.330 [2024-07-15 15:09:15.249251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.249278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.249289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.249293] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.249298] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.258803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 00:26:59.330 [2024-07-15 15:09:15.269505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.269535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.269545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.269550] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.269554] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.278838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 
00:26:59.330 [2024-07-15 15:09:15.289660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.289685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.289694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.289699] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.289703] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.298851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 00:26:59.330 [2024-07-15 15:09:15.308551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.308579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.308589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.308593] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.308597] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.318965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 00:26:59.330 [2024-07-15 15:09:15.329631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.329667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.329676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.329680] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.329685] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.339224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 
00:26:59.330 [2024-07-15 15:09:15.349723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.349753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.349763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.349767] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.349771] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.358920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 00:26:59.330 [2024-07-15 15:09:15.369815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.369840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.369849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.369854] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.369858] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.330 [2024-07-15 15:09:15.378894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.330 qpair failed and we were unable to recover it. 00:26:59.330 [2024-07-15 15:09:15.388829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.330 [2024-07-15 15:09:15.388855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.330 [2024-07-15 15:09:15.388864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.330 [2024-07-15 15:09:15.388871] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.330 [2024-07-15 15:09:15.388875] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.399220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 
00:26:59.591 [2024-07-15 15:09:15.409672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.409700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.409710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.409714] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.409719] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.419274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 00:26:59.591 [2024-07-15 15:09:15.429989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.430023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.430032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.430037] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.430041] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.439254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 00:26:59.591 [2024-07-15 15:09:15.450070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.450102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.450111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.450115] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.450120] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.459413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 
00:26:59.591 [2024-07-15 15:09:15.469778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.469804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.469813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.469818] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.469822] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.479334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 00:26:59.591 [2024-07-15 15:09:15.490255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.490286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.490295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.490300] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.490304] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.499428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 00:26:59.591 [2024-07-15 15:09:15.509376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.509401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.509410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.509414] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.509418] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.519243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 
00:26:59.591 [2024-07-15 15:09:15.530103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.530129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.530138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.530142] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.530147] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.539493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 00:26:59.591 [2024-07-15 15:09:15.549889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.549917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.549926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.549930] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.549935] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.559565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 00:26:59.591 [2024-07-15 15:09:15.570298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.570326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.570338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.570343] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.570347] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.579606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 
00:26:59.591 [2024-07-15 15:09:15.590178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.590211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.590220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.590225] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.590233] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.599691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.591 qpair failed and we were unable to recover it. 00:26:59.591 [2024-07-15 15:09:15.610378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.591 [2024-07-15 15:09:15.610409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.591 [2024-07-15 15:09:15.610419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.591 [2024-07-15 15:09:15.610423] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.591 [2024-07-15 15:09:15.610427] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.591 [2024-07-15 15:09:15.619759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.592 qpair failed and we were unable to recover it. 00:26:59.592 [2024-07-15 15:09:15.629981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.592 [2024-07-15 15:09:15.630007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.592 [2024-07-15 15:09:15.630017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.592 [2024-07-15 15:09:15.630022] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.592 [2024-07-15 15:09:15.630026] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.592 [2024-07-15 15:09:15.639716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.592 qpair failed and we were unable to recover it. 
00:26:59.592 [2024-07-15 15:09:15.650425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.592 [2024-07-15 15:09:15.650456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.592 [2024-07-15 15:09:15.650465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.592 [2024-07-15 15:09:15.650470] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.592 [2024-07-15 15:09:15.650477] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.852 [2024-07-15 15:09:15.660004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.852 qpair failed and we were unable to recover it. 00:26:59.852 [2024-07-15 15:09:15.670658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.852 [2024-07-15 15:09:15.670685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.852 [2024-07-15 15:09:15.670694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.852 [2024-07-15 15:09:15.670699] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.852 [2024-07-15 15:09:15.670703] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.852 [2024-07-15 15:09:15.679909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.852 qpair failed and we were unable to recover it. 00:26:59.852 [2024-07-15 15:09:15.690645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.852 [2024-07-15 15:09:15.690672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.852 [2024-07-15 15:09:15.690681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.852 [2024-07-15 15:09:15.690686] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.852 [2024-07-15 15:09:15.690690] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.852 [2024-07-15 15:09:15.700058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.852 qpair failed and we were unable to recover it. 
00:26:59.852 [2024-07-15 15:09:15.710351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.852 [2024-07-15 15:09:15.710378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.852 [2024-07-15 15:09:15.710387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.852 [2024-07-15 15:09:15.710391] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.852 [2024-07-15 15:09:15.710395] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.852 [2024-07-15 15:09:15.719844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 00:26:59.853 [2024-07-15 15:09:15.730670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.730699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.730708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.730713] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.730717] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.853 [2024-07-15 15:09:15.740183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 00:26:59.853 [2024-07-15 15:09:15.750790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.750821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.750830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.750835] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.750839] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.853 [2024-07-15 15:09:15.760295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 
00:26:59.853 [2024-07-15 15:09:15.770286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.770318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.770327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.770332] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.770338] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.853 [2024-07-15 15:09:15.780160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 00:26:59.853 [2024-07-15 15:09:15.790461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.790487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.790497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.790501] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.790505] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.853 [2024-07-15 15:09:15.800441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 00:26:59.853 [2024-07-15 15:09:15.811007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.811036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.811055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.811061] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.811065] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.853 [2024-07-15 15:09:15.820119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 
00:26:59.853 [2024-07-15 15:09:15.830809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.830837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.830847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.830855] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.830859] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.853 [2024-07-15 15:09:15.840301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 00:26:59.853 [2024-07-15 15:09:15.851037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.851068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.851079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.851085] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.851090] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.853 [2024-07-15 15:09:15.860311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 00:26:59.853 [2024-07-15 15:09:15.870844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.870873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.870883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.870887] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.870891] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.853 [2024-07-15 15:09:15.880468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 
00:26:59.853 [2024-07-15 15:09:15.891288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.891322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.891341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.891348] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.891352] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.853 [2024-07-15 15:09:15.900384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.853 qpair failed and we were unable to recover it. 00:26:59.853 [2024-07-15 15:09:15.910920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.853 [2024-07-15 15:09:15.910948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.853 [2024-07-15 15:09:15.910959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.853 [2024-07-15 15:09:15.910964] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.853 [2024-07-15 15:09:15.910968] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:15.920454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 00:27:00.115 [2024-07-15 15:09:15.931431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:15.931457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:15.931467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.115 [2024-07-15 15:09:15.931472] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.115 [2024-07-15 15:09:15.931476] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:15.940621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 
00:27:00.115 [2024-07-15 15:09:15.950829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:15.950855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:15.950865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.115 [2024-07-15 15:09:15.950869] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.115 [2024-07-15 15:09:15.950874] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:15.960549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 00:27:00.115 [2024-07-15 15:09:15.971216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:15.971248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:15.971257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.115 [2024-07-15 15:09:15.971262] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.115 [2024-07-15 15:09:15.971267] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:15.981121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 00:27:00.115 [2024-07-15 15:09:15.991763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:15.991797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:15.991816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.115 [2024-07-15 15:09:15.991822] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.115 [2024-07-15 15:09:15.991827] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:16.000655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 
00:27:00.115 [2024-07-15 15:09:16.011567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:16.011596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:16.011613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.115 [2024-07-15 15:09:16.011618] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.115 [2024-07-15 15:09:16.011622] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:16.020843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 00:27:00.115 [2024-07-15 15:09:16.031201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:16.031236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:16.031256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.115 [2024-07-15 15:09:16.031261] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.115 [2024-07-15 15:09:16.031266] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:16.040831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 00:27:00.115 [2024-07-15 15:09:16.051283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:16.051310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:16.051321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.115 [2024-07-15 15:09:16.051326] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.115 [2024-07-15 15:09:16.051330] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:16.060983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 
00:27:00.115 [2024-07-15 15:09:16.071796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:16.071825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:16.071835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.115 [2024-07-15 15:09:16.071840] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.115 [2024-07-15 15:09:16.071844] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:16.081137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 00:27:00.115 [2024-07-15 15:09:16.091724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:16.091751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:16.091760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.115 [2024-07-15 15:09:16.091765] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.115 [2024-07-15 15:09:16.091773] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.115 [2024-07-15 15:09:16.101289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.115 qpair failed and we were unable to recover it. 00:27:00.115 [2024-07-15 15:09:16.111621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.115 [2024-07-15 15:09:16.111647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.115 [2024-07-15 15:09:16.111657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.116 [2024-07-15 15:09:16.111661] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.116 [2024-07-15 15:09:16.111666] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.116 [2024-07-15 15:09:16.121090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.116 qpair failed and we were unable to recover it. 
00:27:00.116 [2024-07-15 15:09:16.131726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.116 [2024-07-15 15:09:16.131756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.116 [2024-07-15 15:09:16.131766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.116 [2024-07-15 15:09:16.131770] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.116 [2024-07-15 15:09:16.131775] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.116 [2024-07-15 15:09:16.141169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.116 qpair failed and we were unable to recover it. 00:27:00.116 [2024-07-15 15:09:16.151813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.116 [2024-07-15 15:09:16.151845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.116 [2024-07-15 15:09:16.151854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.116 [2024-07-15 15:09:16.151859] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.116 [2024-07-15 15:09:16.151863] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.116 [2024-07-15 15:09:16.161384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.116 qpair failed and we were unable to recover it. 00:27:00.116 [2024-07-15 15:09:16.171833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.116 [2024-07-15 15:09:16.171860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.116 [2024-07-15 15:09:16.171870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.116 [2024-07-15 15:09:16.171875] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.116 [2024-07-15 15:09:16.171879] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.377 [2024-07-15 15:09:16.181179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.377 qpair failed and we were unable to recover it. 
00:27:00.377 [2024-07-15 15:09:16.191609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.377 [2024-07-15 15:09:16.191637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.377 [2024-07-15 15:09:16.191646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.377 [2024-07-15 15:09:16.191651] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.377 [2024-07-15 15:09:16.191655] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.377 [2024-07-15 15:09:16.201532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.377 qpair failed and we were unable to recover it. 00:27:00.377 [2024-07-15 15:09:16.212390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.377 [2024-07-15 15:09:16.212425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.377 [2024-07-15 15:09:16.212435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.377 [2024-07-15 15:09:16.212439] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.377 [2024-07-15 15:09:16.212444] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.377 [2024-07-15 15:09:16.221332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.377 qpair failed and we were unable to recover it. 00:27:00.377 [2024-07-15 15:09:16.232373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.377 [2024-07-15 15:09:16.232400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.377 [2024-07-15 15:09:16.232409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.377 [2024-07-15 15:09:16.232414] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.377 [2024-07-15 15:09:16.232418] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.377 [2024-07-15 15:09:16.241708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.377 qpair failed and we were unable to recover it. 
00:27:00.377 [2024-07-15 15:09:16.252503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.377 [2024-07-15 15:09:16.252535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.377 [2024-07-15 15:09:16.252544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.377 [2024-07-15 15:09:16.252548] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.377 [2024-07-15 15:09:16.252553] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.377 [2024-07-15 15:09:16.261417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.377 qpair failed and we were unable to recover it. 00:27:00.377 [2024-07-15 15:09:16.272051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.377 [2024-07-15 15:09:16.272076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.377 [2024-07-15 15:09:16.272085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.377 [2024-07-15 15:09:16.272092] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.377 [2024-07-15 15:09:16.272097] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.377 [2024-07-15 15:09:16.281615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.377 qpair failed and we were unable to recover it. 00:27:00.377 [2024-07-15 15:09:16.292268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.377 [2024-07-15 15:09:16.292301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.377 [2024-07-15 15:09:16.292310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.377 [2024-07-15 15:09:16.292315] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.377 [2024-07-15 15:09:16.292319] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.377 [2024-07-15 15:09:16.301761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.377 qpair failed and we were unable to recover it. 
00:27:00.377 [2024-07-15 15:09:16.312512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-15 15:09:16.312538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-15 15:09:16.312548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-15 15:09:16.312553] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-15 15:09:16.312557] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.378 [2024-07-15 15:09:16.321654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.378 qpair failed and we were unable to recover it. 00:27:00.378 [2024-07-15 15:09:16.332635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-15 15:09:16.332669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-15 15:09:16.332678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-15 15:09:16.332683] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-15 15:09:16.332687] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.378 [2024-07-15 15:09:16.342191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.378 qpair failed and we were unable to recover it. 00:27:00.378 [2024-07-15 15:09:16.352234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-15 15:09:16.352260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-15 15:09:16.352269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-15 15:09:16.352274] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-15 15:09:16.352278] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.378 [2024-07-15 15:09:16.362005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.378 qpair failed and we were unable to recover it. 
00:27:00.378 [2024-07-15 15:09:16.372587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-15 15:09:16.372617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-15 15:09:16.372626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-15 15:09:16.372631] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-15 15:09:16.372635] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.378 [2024-07-15 15:09:16.382141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.378 qpair failed and we were unable to recover it. 00:27:00.378 [2024-07-15 15:09:16.392621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-15 15:09:16.392649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-15 15:09:16.392658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-15 15:09:16.392662] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-15 15:09:16.392667] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.378 [2024-07-15 15:09:16.402152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.378 qpair failed and we were unable to recover it. 00:27:00.378 [2024-07-15 15:09:16.412653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-15 15:09:16.412686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-15 15:09:16.412695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-15 15:09:16.412700] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-15 15:09:16.412704] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.378 [2024-07-15 15:09:16.422383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.378 qpair failed and we were unable to recover it. 
00:27:00.378 [2024-07-15 15:09:16.432467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-15 15:09:16.432494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-15 15:09:16.432503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-15 15:09:16.432508] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-15 15:09:16.432512] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.638 [2024-07-15 15:09:16.442258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.638 qpair failed and we were unable to recover it. 00:27:00.638 [2024-07-15 15:09:16.452647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.638 [2024-07-15 15:09:16.452675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.638 [2024-07-15 15:09:16.452687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.638 [2024-07-15 15:09:16.452692] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.638 [2024-07-15 15:09:16.452696] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.638 [2024-07-15 15:09:16.462346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.638 qpair failed and we were unable to recover it. 00:27:00.638 [2024-07-15 15:09:16.473125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.638 [2024-07-15 15:09:16.473154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.638 [2024-07-15 15:09:16.473163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.638 [2024-07-15 15:09:16.473168] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.638 [2024-07-15 15:09:16.473172] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.638 [2024-07-15 15:09:16.482011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.638 qpair failed and we were unable to recover it. 
00:27:00.638 [2024-07-15 15:09:16.492662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.638 [2024-07-15 15:09:16.492688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.638 [2024-07-15 15:09:16.492697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.638 [2024-07-15 15:09:16.492702] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.638 [2024-07-15 15:09:16.492706] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.638 [2024-07-15 15:09:16.502215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.638 qpair failed and we were unable to recover it. 00:27:00.638 [2024-07-15 15:09:16.512575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.638 [2024-07-15 15:09:16.512602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.638 [2024-07-15 15:09:16.512611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.638 [2024-07-15 15:09:16.512616] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.638 [2024-07-15 15:09:16.512621] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.638 [2024-07-15 15:09:16.522218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.638 qpair failed and we were unable to recover it. 00:27:00.638 [2024-07-15 15:09:16.533179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.638 [2024-07-15 15:09:16.533215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.638 [2024-07-15 15:09:16.533224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.638 [2024-07-15 15:09:16.533231] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.638 [2024-07-15 15:09:16.533239] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.638 [2024-07-15 15:09:16.542473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.638 qpair failed and we were unable to recover it. 
00:27:00.638 [2024-07-15 15:09:16.553042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.638 [2024-07-15 15:09:16.553071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.638 [2024-07-15 15:09:16.553081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.638 [2024-07-15 15:09:16.553085] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.638 [2024-07-15 15:09:16.553089] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.638 [2024-07-15 15:09:16.562359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.638 qpair failed and we were unable to recover it. 00:27:00.638 [2024-07-15 15:09:16.573240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.638 [2024-07-15 15:09:16.573269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.638 [2024-07-15 15:09:16.573279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-15 15:09:16.573283] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-15 15:09:16.573287] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.639 [2024-07-15 15:09:16.582386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.639 qpair failed and we were unable to recover it. 00:27:00.639 [2024-07-15 15:09:16.592971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-15 15:09:16.592998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-15 15:09:16.593007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-15 15:09:16.593012] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-15 15:09:16.593016] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.639 [2024-07-15 15:09:16.602336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.639 qpair failed and we were unable to recover it. 
00:27:00.639 [2024-07-15 15:09:16.613105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-15 15:09:16.613134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-15 15:09:16.613143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-15 15:09:16.613148] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-15 15:09:16.613152] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.639 [2024-07-15 15:09:16.623034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.639 qpair failed and we were unable to recover it. 00:27:00.639 [2024-07-15 15:09:16.633380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-15 15:09:16.633410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-15 15:09:16.633419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-15 15:09:16.633424] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-15 15:09:16.633428] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.639 [2024-07-15 15:09:16.642539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.639 qpair failed and we were unable to recover it. 00:27:00.639 [2024-07-15 15:09:16.653254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-15 15:09:16.653288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-15 15:09:16.653297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-15 15:09:16.653301] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-15 15:09:16.653305] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.639 [2024-07-15 15:09:16.662794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.639 qpair failed and we were unable to recover it. 
00:27:00.639 [2024-07-15 15:09:16.673049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-15 15:09:16.673074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-15 15:09:16.673083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-15 15:09:16.673087] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-15 15:09:16.673091] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.639 [2024-07-15 15:09:16.682796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.639 qpair failed and we were unable to recover it. 00:27:00.639 [2024-07-15 15:09:16.693410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-15 15:09:16.693442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-15 15:09:16.693451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-15 15:09:16.693455] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-15 15:09:16.693459] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.899 [2024-07-15 15:09:16.702715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.899 qpair failed and we were unable to recover it. 00:27:00.899 [2024-07-15 15:09:16.713550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.899 [2024-07-15 15:09:16.713575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.899 [2024-07-15 15:09:16.713584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.899 [2024-07-15 15:09:16.713592] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.899 [2024-07-15 15:09:16.713596] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.899 [2024-07-15 15:09:16.722983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.899 qpair failed and we were unable to recover it. 
00:27:00.899 [2024-07-15 15:09:16.733628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.899 [2024-07-15 15:09:16.733664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.899 [2024-07-15 15:09:16.733673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.899 [2024-07-15 15:09:16.733678] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.899 [2024-07-15 15:09:16.733683] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.899 [2024-07-15 15:09:16.743194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.899 qpair failed and we were unable to recover it. 00:27:00.899 [2024-07-15 15:09:16.753285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.899 [2024-07-15 15:09:16.753312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.899 [2024-07-15 15:09:16.753321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.899 [2024-07-15 15:09:16.753325] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.899 [2024-07-15 15:09:16.753330] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.899 [2024-07-15 15:09:16.763126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.899 qpair failed and we were unable to recover it. 00:27:00.899 [2024-07-15 15:09:16.773889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.773919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.773928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.773933] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.773938] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.900 [2024-07-15 15:09:16.783311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.900 qpair failed and we were unable to recover it. 
00:27:00.900 [2024-07-15 15:09:16.793813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.793842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.793852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.793857] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.793861] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.900 [2024-07-15 15:09:16.803140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-15 15:09:16.813741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.813770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.813779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.813784] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.813788] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.900 [2024-07-15 15:09:16.823070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-15 15:09:16.833571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.833598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.833608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.833613] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.833618] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.900 [2024-07-15 15:09:16.843213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.900 qpair failed and we were unable to recover it. 
00:27:00.900 [2024-07-15 15:09:16.853951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.853986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.853995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.854000] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.854004] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.900 [2024-07-15 15:09:16.863273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-15 15:09:16.873906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.873937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.873946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.873951] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.873955] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.900 [2024-07-15 15:09:16.883377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-15 15:09:16.894177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.894205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.894217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.894222] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.894226] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.900 [2024-07-15 15:09:16.903554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.900 qpair failed and we were unable to recover it. 
00:27:00.900 [2024-07-15 15:09:16.913854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.913883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.913903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.913908] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.913913] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.900 [2024-07-15 15:09:16.923170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-15 15:09:16.933431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.933460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.933471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.933476] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.933480] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:00.900 [2024-07-15 15:09:16.943609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-15 15:09:16.954343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-15 15:09:16.954371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-15 15:09:16.954385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-15 15:09:16.954390] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-15 15:09:16.954395] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:16.963539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 
00:27:01.162 [2024-07-15 15:09:16.974140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:16.974165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:16.974175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:16.974180] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:16.974187] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:16.983822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 00:27:01.162 [2024-07-15 15:09:16.993926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:16.993955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:16.993964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:16.993969] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:16.993973] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.003721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 00:27:01.162 [2024-07-15 15:09:17.014477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.014506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:17.014516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:17.014520] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:17.014525] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.023678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 
00:27:01.162 [2024-07-15 15:09:17.034578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.034611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:17.034621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:17.034625] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:17.034630] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.043882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 00:27:01.162 [2024-07-15 15:09:17.054519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.054545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:17.054554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:17.054559] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:17.054563] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.063748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 00:27:01.162 [2024-07-15 15:09:17.074361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.074390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:17.074400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:17.074405] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:17.074409] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.083781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 
00:27:01.162 [2024-07-15 15:09:17.094665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.094692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:17.094701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:17.094706] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:17.094710] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.104039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 00:27:01.162 [2024-07-15 15:09:17.114818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.114847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:17.114857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:17.114861] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:17.114866] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.123989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 00:27:01.162 [2024-07-15 15:09:17.134528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.134557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:17.134566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:17.134571] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:17.134575] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.143971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 
00:27:01.162 [2024-07-15 15:09:17.154405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.154431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:17.154443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:17.154448] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:17.154452] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.164271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 00:27:01.162 [2024-07-15 15:09:17.174920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.174946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.162 [2024-07-15 15:09:17.174955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.162 [2024-07-15 15:09:17.174960] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.162 [2024-07-15 15:09:17.174964] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.162 [2024-07-15 15:09:17.184418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.162 qpair failed and we were unable to recover it. 00:27:01.162 [2024-07-15 15:09:17.194888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.162 [2024-07-15 15:09:17.194919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.163 [2024-07-15 15:09:17.194928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.163 [2024-07-15 15:09:17.194933] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.163 [2024-07-15 15:09:17.194937] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.163 [2024-07-15 15:09:17.204562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.163 qpair failed and we were unable to recover it. 
00:27:01.163 [2024-07-15 15:09:17.214927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.163 [2024-07-15 15:09:17.214955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.163 [2024-07-15 15:09:17.214964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.163 [2024-07-15 15:09:17.214969] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.163 [2024-07-15 15:09:17.214973] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.224045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 00:27:01.424 [2024-07-15 15:09:17.234835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.234864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.234873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.234878] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.234882] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.244441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 00:27:01.424 [2024-07-15 15:09:17.255161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.255191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.255200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.255205] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.255209] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.264831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 
00:27:01.424 [2024-07-15 15:09:17.274999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.275024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.275033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.275037] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.275042] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.284614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 00:27:01.424 [2024-07-15 15:09:17.294839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.294873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.294883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.294888] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.294892] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.304417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 00:27:01.424 [2024-07-15 15:09:17.314827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.314853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.314862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.314867] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.314871] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.324663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 
00:27:01.424 [2024-07-15 15:09:17.335158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.335186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.335198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.335203] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.335207] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.344748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 00:27:01.424 [2024-07-15 15:09:17.355265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.355297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.355306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.355310] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.355315] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.364665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 00:27:01.424 [2024-07-15 15:09:17.375347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.375371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.375381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.375385] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.375390] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.384776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 
00:27:01.424 [2024-07-15 15:09:17.395055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.395083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.395092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.395097] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.395101] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.404903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 00:27:01.424 [2024-07-15 15:09:17.414856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.414886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.414896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.414900] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.414907] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.424500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 00:27:01.424 [2024-07-15 15:09:17.435354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.424 [2024-07-15 15:09:17.435383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.424 [2024-07-15 15:09:17.435392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.424 [2024-07-15 15:09:17.435397] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.424 [2024-07-15 15:09:17.435401] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.424 [2024-07-15 15:09:17.444877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.424 qpair failed and we were unable to recover it. 
00:27:01.424 [2024-07-15 15:09:17.455368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.425 [2024-07-15 15:09:17.455397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.425 [2024-07-15 15:09:17.455407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.425 [2024-07-15 15:09:17.455411] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.425 [2024-07-15 15:09:17.455416] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.425 [2024-07-15 15:09:17.465167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.425 qpair failed and we were unable to recover it. 00:27:01.425 [2024-07-15 15:09:17.475258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.425 [2024-07-15 15:09:17.475285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.425 [2024-07-15 15:09:17.475294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.425 [2024-07-15 15:09:17.475299] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.425 [2024-07-15 15:09:17.475304] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.425 [2024-07-15 15:09:17.484905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.425 qpair failed and we were unable to recover it. 00:27:01.685 [2024-07-15 15:09:17.495766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.685 [2024-07-15 15:09:17.495800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.685 [2024-07-15 15:09:17.495809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.685 [2024-07-15 15:09:17.495814] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.685 [2024-07-15 15:09:17.495818] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:01.685 [2024-07-15 15:09:17.505041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.685 qpair failed and we were unable to recover it. 
00:27:01.685 [2024-07-15 15:09:17.515783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:01.686 [2024-07-15 15:09:17.515812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:01.686 [2024-07-15 15:09:17.515821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:01.686 [2024-07-15 15:09:17.515826] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:01.686 [2024-07-15 15:09:17.515830] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:27:01.686 [2024-07-15 15:09:17.525027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:01.686 qpair failed and we were unable to recover it.
[the same six-line CONNECT failure sequence repeats for 26 further attempts, timestamps 15:09:17.535586 through 15:09:18.046460, all against rqpair=0x2000003cf800 on qpair id 2; every attempt ends: qpair failed and we were unable to recover it.]
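When triaging a run like this, the quickest way to size these bursts is to count the recovery-failure marker directly. This grep one-liner is an editor's convenience sketch (the console.log filename is hypothetical, not part of the test):

    grep -c 'qpair failed and we were unable to recover it' console.log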
[32 Write/Read completions returned with error (sct=0, sc=8) at 00:27:03.162, each followed by: starting I/O failed]
00:27:03.162 [2024-07-15 15:09:19.052002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:03.162 [2024-07-15 15:09:19.058739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:03.162 [2024-07-15 15:09:19.058773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:03.162 [2024-07-15 15:09:19.058790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:03.162 [2024-07-15 15:09:19.058798] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:03.162 [2024-07-15 15:09:19.058805] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002bbd80
00:27:03.162 [2024-07-15 15:09:19.069610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:03.162 qpair failed and we were unable to recover it.
[the same sequence repeats once more at 15:09:19.080037 through 19.089448 against rqpair=0x2000002bbd80 on qpair id 4, ending: qpair failed and we were unable to recover it.]
[32 Read/Write completions returned with error (sct=0, sc=8) at 00:27:04.104, each followed by: starting I/O failed]
00:27:04.104 [2024-07-15 15:09:20.095261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.104 [2024-07-15 15:09:20.102652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:04.104 [2024-07-15 15:09:20.102688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:04.104 [2024-07-15 15:09:20.102707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:04.104 [2024-07-15 15:09:20.102715] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:04.104 [2024-07-15 15:09:20.102723] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:27:04.104 [2024-07-15 15:09:20.112334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:04.104 qpair failed and we were unable to recover it.
[the same sequence repeats once more at 15:09:20.122099 through 20.132482 against rqpair=0x2000003d3000 on qpair id 3, ending: qpair failed and we were unable to recover it.]
00:27:04.104 [2024-07-15 15:09:20.132651] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:27:04.104 A controller has encountered a failure and is being reset.
00:27:04.104 [2024-07-15 15:09:20.132776] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:27:04.365 [2024-07-15 15:09:20.172645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:04.365 Controller properly reset.
00:27:04.365 Initializing NVMe Controllers
00:27:04.365 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:04.365 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:04.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:04.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:04.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:04.365 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:04.365 Initialization complete. Launching workers.
00:27:04.365 Starting thread on core 1
00:27:04.365 Starting thread on core 2
00:27:04.365 Starting thread on core 3
00:27:04.365 Starting thread on core 0
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:27:04.365
00:27:04.365 real 0m13.649s
00:27:04.365 user 0m27.727s
00:27:04.365 sys 0m2.168s
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:04.365 ************************************
00:27:04.365 END TEST nvmf_target_disconnect_tc2
00:27:04.365 ************************************
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']'
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:04.365 ************************************
00:27:04.365 START TEST nvmf_target_disconnect_tc3
00:27:04.365 ************************************
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc3
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1990013
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2
00:27:04.365 15:09:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:27:04.365 EAL: No free 2048 kB hugepages reported on node 1
00:27:06.909 15:09:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1988382
00:27:06.909 15:09:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2
[32 Write/Read completions returned with error (sct=0, sc=8) at 00:27:07.480, each followed by: starting I/O failed]
00:27:07.481 [2024-07-15 15:09:23.521197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:08.422 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1988382 Killed "${NVMF_APP[@]}" "$@"
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1990883
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1990883
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1990883 ']'
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:08.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:08.422 15:09:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:08.683 [2024-07-15 15:09:24.407978] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization...
00:27:08.683 [2024-07-15 15:09:24.408040] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:08.683 EAL: No free 2048 kB hugepages reported on node 1
00:27:08.683 [2024-07-15 15:09:24.492436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[32 Write/Read completions returned with error (sct=0, sc=8) at 00:27:08.683, each followed by: starting I/O failed]
00:27:08.683 [2024-07-15 15:09:24.526660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:08.683 [2024-07-15 15:09:24.529023] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:08.683 [2024-07-15 15:09:24.529042] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:08.683 [2024-07-15 15:09:24.529047] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:27:08.683 [2024-07-15 15:09:24.546789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:08.683 [2024-07-15 15:09:24.546817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:08.683 [2024-07-15 15:09:24.546822] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:08.683 [2024-07-15 15:09:24.546827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:08.683 [2024-07-15 15:09:24.546831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
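The nvmfappstart/waitforlisten trace above amounts to launching a second nvmf_tgt for the failover address and polling its RPC socket before configuring it. A minimal bash sketch of that step (the rpc_get_methods probe and the retry cadence are assumptions, not copied from common.sh):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # poll /var/tmp/spdk.sock until the target answers an RPC (max_retries=100 as in the trace)
    for ((i = 0; i < 100; i++)); do
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done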
00:27:08.683 [2024-07-15 15:09:24.546978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:08.683 [2024-07-15 15:09:24.547108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:08.683 [2024-07-15 15:09:24.547276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:08.683 [2024-07-15 15:09:24.547466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:09.253 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # return 0
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.254 Malloc0
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.254 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.254 [2024-07-15 15:09:25.265197] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5c3550/0x5cf0b0) succeed.
00:27:09.254 [2024-07-15 15:09:25.275347] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5c4b90/0x610740) succeed.
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.515 [2024-07-15 15:09:25.403915] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.515 15:09:25 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1990013
00:27:09.515 [2024-07-15 15:09:25.533271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:09.515 qpair failed and we were unable to recover it.
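Issued against the RPC socket directly, the rpc_cmd calls traced above correspond roughly to the following rpc.py sequence (all arguments are taken from the trace; the rpc.py path and the default socket are assumptions):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0                        # backing malloc bdev, 512-byte blocks
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420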
00:27:09.515 [2024-07-15 15:09:25.535803] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:09.515 [2024-07-15 15:09:25.535813] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:09.515 [2024-07-15 15:09:25.535818] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:27:10.901 [2024-07-15 15:09:26.540039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:10.901 qpair failed and we were unable to recover it.
[the same REJECTED/connect retry repeats at roughly one-second intervals against rqpair=0x2000003cf800 (15:09:26.542331, 27.548954, 28.555177, 29.561503, 30.567625), each ending on qpair id 2: qpair failed and we were unable to recover it.]
00:27:15.608 [2024-07-15 15:09:31.575104] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:15.608 [2024-07-15 15:09:31.575165] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:15.608 [2024-07-15 15:09:31.575186] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:16.549 [2024-07-15 15:09:32.579548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.549 qpair failed and we were unable to recover it.
[one further identical attempt against rqpair=0x2000003d4c40 (15:09:32.582030 through 33.586220, qpair id 1) also ends: qpair failed and we were unable to recover it.]
[32 Read/Write completions returned with error (sct=0, sc=8) at 00:27:18.868, each followed by: starting I/O failed]
00:27:18.868 [2024-07-15 15:09:34.592078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:18.868 [2024-07-15 15:09:34.594518] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:18.868 [2024-07-15 15:09:34.594535] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:18.868 [2024-07-15 15:09:34.594541] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002be180
00:27:19.807 [2024-07-15 15:09:35.598826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:19.807 qpair failed and we were unable to recover it.
00:27:19.807 [2024-07-15 15:09:35.600973] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:19.807 [2024-07-15 15:09:35.600983] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:19.807 [2024-07-15 15:09:35.600987] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002be180
00:27:20.744 [2024-07-15 15:09:36.605364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:20.744 qpair failed and we were unable to recover it.
00:27:20.744 [2024-07-15 15:09:36.605535] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:27:20.744 A controller has encountered a failure and is being reset.
00:27:20.744 Resorting to new failover address 192.168.100.9
00:27:20.744 [2024-07-15 15:09:36.605650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:20.744 [2024-07-15 15:09:36.605708] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:27:20.744 [2024-07-15 15:09:36.608203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:20.744 Controller properly reset.
[32 Write/Read completions returned with error (sct=0, sc=8) at 00:27:21.756, each followed by: starting I/O failed]
00:27:21.757 [2024-07-15 15:09:37.664936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:21.757 Initializing NVMe Controllers
00:27:21.757 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:21.757 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:21.757 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:21.757 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:21.757 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:21.757 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:21.757 Initialization complete. Launching workers.
00:27:21.757 Starting thread on core 1
00:27:21.757 Starting thread on core 2
00:27:21.757 Starting thread on core 3
00:27:21.757 Starting thread on core 0
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync
00:27:21.757
00:27:21.757 real 0m17.393s
00:27:21.757 user 1m0.531s
00:27:21.757 sys 0m3.339s
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:21.757 ************************************
00:27:21.757 END TEST nvmf_target_disconnect_tc3
00:27:21.757 ************************************
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:21.757 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
00:27:21.757 rmmod nvme_fabrics 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1990883 ']' 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1990883 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1990883 ']' 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1990883 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1990883 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1990883' 00:27:22.017 killing process with pid 1990883 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1990883 00:27:22.017 15:09:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1990883 00:27:22.017 15:09:38 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.017 15:09:38 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:22.017 00:27:22.017 real 0m40.633s 00:27:22.017 user 2m25.361s 00:27:22.017 sys 0m11.741s 00:27:22.017 15:09:38 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.017 15:09:38 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:22.017 ************************************ 00:27:22.017 END TEST nvmf_target_disconnect 00:27:22.017 ************************************ 00:27:22.278 15:09:38 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:27:22.278 15:09:38 nvmf_rdma -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:22.278 15:09:38 nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.278 15:09:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:22.278 15:09:38 nvmf_rdma -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:22.278 00:27:22.278 real 19m45.213s 00:27:22.278 user 46m51.727s 00:27:22.278 sys 5m34.939s 00:27:22.278 15:09:38 nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.278 15:09:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:22.278 ************************************ 00:27:22.278 END TEST nvmf_rdma 00:27:22.278 ************************************ 00:27:22.278 15:09:38 -- common/autotest_common.sh@1142 -- # return 0 00:27:22.278 15:09:38 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:22.278 15:09:38 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:22.278 15:09:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.278 
15:09:38 -- common/autotest_common.sh@10 -- # set +x 00:27:22.278 ************************************ 00:27:22.278 START TEST spdkcli_nvmf_rdma 00:27:22.279 ************************************ 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:22.279 * Looking for test storage... 00:27:22.279 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.279 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.540 15:09:38 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1993672 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1993672 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@829 -- # '[' -z 1993672 ']' 00:27:22.541 
15:09:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.541 15:09:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:22.541 [2024-07-15 15:09:38.431172] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:27:22.541 [2024-07-15 15:09:38.431259] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1993672 ] 00:27:22.541 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.541 [2024-07-15 15:09:38.502585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:22.541 [2024-07-15 15:09:38.578271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.541 [2024-07-15 15:09:38.578306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # return 0 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:27:23.483 15:09:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:27:31.624 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:27:31.624 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:27:31.624 Found net devices under 0000:98:00.0: mlx_0_0 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:31.624 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:27:31.625 Found net devices under 0000:98:00.1: mlx_0_1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 
00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:31.625 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:31.625 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:27:31.625 altname enp152s0f0np0 00:27:31.625 altname ens817f0np0 00:27:31.625 inet 192.168.100.8/24 scope global mlx_0_0 00:27:31.625 valid_lft forever preferred_lft forever 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:31.625 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:31.625 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:27:31.625 altname enp152s0f1np1 00:27:31.625 altname ens817f1np1 00:27:31.625 inet 192.168.100.9/24 scope global mlx_0_1 00:27:31.625 valid_lft forever preferred_lft forever 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:31.625 192.168.100.9' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:31.625 192.168.100.9' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:31.625 192.168.100.9' 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:27:31.625 15:09:46 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:31.625 15:09:47 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:31.625 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:31.625 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:31.625 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:31.625 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:31.625 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:31.625 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:31.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:31.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:31.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:31.625 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:27:31.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:31.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:31.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:27:31.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:31.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:31.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:31.626 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:31.626 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:31.626 ' 00:27:33.544 [2024-07-15 15:09:49.401449] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1663ef0/0x14eae40) succeed. 00:27:33.544 [2024-07-15 15:09:49.416084] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16653a0/0x15d5ec0) succeed. 
00:27:34.931 [2024-07-15 15:09:50.714659] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:27:37.483 [2024-07-15 15:09:53.125935] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:27:39.391 [2024-07-15 15:09:55.208488] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:27:40.768 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:40.768 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:40.768 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:40.768 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:40.768 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:40.768 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:40.768 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:40.768 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:40.768 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:40.768 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:27:40.769 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:27:40.769 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:27:40.769 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:40.769 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:40.769 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:41.027 15:09:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:41.027 15:09:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:41.027 15:09:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:41.027 15:09:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:41.027 15:09:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:41.027 15:09:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:41.027 15:09:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:27:41.027 15:09:56 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:41.286 15:09:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:41.286 15:09:57 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:41.286 15:09:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:41.286 15:09:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:41.286 15:09:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:41.545 15:09:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:41.545 15:09:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:41.545 15:09:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:41.545 15:09:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:41.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:41.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:41.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:41.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:27:41.545 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:27:41.545 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:41.545 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:41.545 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:41.545 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:41.545 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:41.545 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:41.545 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:41.545 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:41.545 ' 00:27:46.843 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:46.843 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:46.843 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:46.843 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:46.843 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:27:46.843 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:27:46.843 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:46.843 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:46.843 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:46.843 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:46.843 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:46.843 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:46.843 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:46.843 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1993672 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@948 -- # '[' -z 1993672 ']' 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # kill -0 1993672 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # uname 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1993672 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1993672' 00:27:46.843 killing process with pid 1993672 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # kill 1993672 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # wait 1993672 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.843 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:46.844 rmmod nvme_rdma 00:27:46.844 rmmod nvme_fabrics 00:27:46.844 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.844 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:27:46.844 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:27:46.844 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:46.844 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:46.844 15:10:02 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:46.844 00:27:46.844 real 0m24.353s 00:27:46.844 user 0m52.718s 00:27:46.844 sys 0m6.609s 00:27:46.844 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:46.844 15:10:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:46.844 ************************************ 00:27:46.844 END TEST spdkcli_nvmf_rdma 00:27:46.844 ************************************ 00:27:46.844 15:10:02 -- common/autotest_common.sh@1142 -- # return 0 00:27:46.844 15:10:02 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:46.844 15:10:02 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:46.844 15:10:02 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:46.844 15:10:02 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:46.844 15:10:02 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:46.844 15:10:02 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:46.844 15:10:02 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:46.844 15:10:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:46.844 15:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:46.844 15:10:02 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:46.844 15:10:02 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:46.844 15:10:02 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:46.844 15:10:02 -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 INFO: APP EXITING 00:27:53.424 INFO: killing all VMs 00:27:53.424 INFO: killing vhost app 00:27:53.424 INFO: EXIT DONE 00:27:56.720 Waiting for block devices as requested 00:27:56.720 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:56.720 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:56.720 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:56.720 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:56.720 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:56.720 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:56.980 0000:80:01.0 (8086 0b00): vfio-pci -> 
ioatdma 00:27:56.980 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:56.980 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:57.241 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:57.241 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:57.241 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:57.241 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:57.500 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:57.500 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:57.500 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:57.760 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:01.968 Cleaning 00:28:01.968 Removing: /var/run/dpdk/spdk0/config 00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:01.968 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:01.968 Removing: /var/run/dpdk/spdk1/config 00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:01.968 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:01.968 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:01.968 Removing: /var/run/dpdk/spdk2/config 00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:01.968 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:01.968 Removing: /var/run/dpdk/spdk3/config 00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:01.968 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:01.968 Removing: 
00:28:01.968 Cleaning
00:28:01.968 Removing: /var/run/dpdk/spdk0/config
00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:28:01.968 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:01.968 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:01.968 Removing: /var/run/dpdk/spdk1/config
00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:28:01.968 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:28:01.968 Removing: /var/run/dpdk/spdk1/hugepage_info
00:28:01.968 Removing: /var/run/dpdk/spdk1/mp_socket
00:28:01.968 Removing: /var/run/dpdk/spdk2/config
00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:28:01.968 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:28:01.968 Removing: /var/run/dpdk/spdk2/hugepage_info
00:28:01.968 Removing: /var/run/dpdk/spdk3/config
00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:28:01.968 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:28:01.968 Removing: /var/run/dpdk/spdk3/hugepage_info
00:28:01.968 Removing: /var/run/dpdk/spdk4/config
00:28:01.968 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:28:01.968 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:28:01.968 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:28:01.968 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:28:01.968 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:28:01.968 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:28:01.968 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:28:01.968 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:28:01.968 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:28:01.968 Removing: /var/run/dpdk/spdk4/hugepage_info
00:28:01.968 Removing: /dev/shm/bdevperf_trace.pid1752145
00:28:01.969 Removing: /dev/shm/bdevperf_trace.pid1888565
00:28:01.969 Removing: /dev/shm/bdev_svc_trace.1
00:28:01.969 Removing: /dev/shm/nvmf_trace.0
00:28:01.969 Removing: /dev/shm/spdk_tgt_trace.pid1614139
00:28:01.969 Removing: /var/run/dpdk/spdk0
00:28:01.969 Removing: /var/run/dpdk/spdk1
00:28:01.969 Removing: /var/run/dpdk/spdk2
00:28:01.969 Removing: /var/run/dpdk/spdk3
00:28:01.969 Removing: /var/run/dpdk/spdk4
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1612659
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1614139
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1614770
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1616071
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1616176
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1617909
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1617999
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1618399
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1623521
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1624265
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1624558
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1624827
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1625159
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1625551
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1625906
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1626221
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1626463
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1627702
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1630963
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1631324
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1631704
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1632019
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1632394
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1632599
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1633104
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1633150
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1633483
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1633817
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1633881
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1634190
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1634630
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1634982
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1635329
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1635497
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1635701
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1635836
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1636185
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1636443
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1636638
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1636924
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1637271
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1637626
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1637975
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1638162
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1638377
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1638714
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1639066
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1639423
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1639676
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1639869
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1640162
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1640509
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1640867
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1641217
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1641423
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1641634
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1641988
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1642335
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1647236
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1702795
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1708016
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1720246
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1726860
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1731894
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1732903
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1741440
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1752145
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1752602
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1757671
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1764943
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1768022
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1780551
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1812235
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1816921
00:28:01.969 Removing: /var/run/dpdk/spdk_pid1886227
00:28:02.229 Removing: /var/run/dpdk/spdk_pid1887190
00:28:02.229 Removing: /var/run/dpdk/spdk_pid1888565
00:28:02.229 Removing: /var/run/dpdk/spdk_pid1893786
00:28:02.229 Removing: /var/run/dpdk/spdk_pid1902837
00:28:02.229 Removing: /var/run/dpdk/spdk_pid1903849
00:28:02.229 Removing: /var/run/dpdk/spdk_pid1904894
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1906041
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1906518
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1912052
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1912152
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1917272
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1917942
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1918612
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1919332
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1919544
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1925350
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1926075
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1931400
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1935268
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1942037
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1954011
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1954014
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1978969
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1979307
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1986772
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1987167
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1990013
00:28:02.230 Removing: /var/run/dpdk/spdk_pid1993672
00:28:02.230 Clean
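The Cleaning pass above sweeps three kinds of leftovers: the per-process DPDK runtime directories (/var/run/dpdk/spdkN with their fbarray_* memory-segment metadata and hugepage_info), the shared-memory trace files in /dev/shm, and the /var/run/dpdk/spdk_pid* lock files accumulated by every SPDK process the job started. A hedged sketch of such a sweep; the helper name is made up and the globs simply mirror the paths above:

# Sketch: remove stale SPDK/DPDK runtime state after all targets have exited.
cleanup_runtime_state() {
    local f
    for f in /var/run/dpdk/spdk* /dev/shm/*_trace.* /dev/shm/bdevperf_trace.pid*; do
        [[ -e $f ]] || continue
        echo "Removing: $f"
        rm -rf "$f"      # spdkN are directories; spdk_pid* locks and traces are files
    done
}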
00:28:02.230 15:10:18 -- common/autotest_common.sh@1451 -- # return 0
00:28:02.230 15:10:18 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:28:02.230 15:10:18 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:02.230 15:10:18 -- common/autotest_common.sh@10 -- # set +x
00:28:02.230 15:10:18 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:28:02.230 15:10:18 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:02.230 15:10:18 -- common/autotest_common.sh@10 -- # set +x
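timing_enter and timing_exit bracket each named step of the run; the pairs accumulate "Step: seconds" records in the output directory's timing.txt, which timing_finish hands to FlameGraph further down in this log. A toy version of the pair, only to show the shape of the data ($output_dir stands in for .../spdk/../output; SPDK's real helpers live in autotest_common.sh and track nested steps properly):

# Toy timing_enter/timing_exit: record wall-clock seconds per named step.
declare -a timing_stack=()
timing_enter() { timing_stack+=("$1,$(date +%s.%N)"); }
timing_exit() {
    local entry=${timing_stack[-1]}
    unset 'timing_stack[-1]'
    awk -v n="${entry%,*}" -v s="${entry#*,}" -v e="$(date +%s.%N)" \
        'BEGIN { printf "%s %.3f\n", n, e - s }' >> "$output_dir/timing.txt"
}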
00:28:02.491 15:10:18 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:28:02.491 15:10:18 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:28:02.491 15:10:18 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:28:02.491 15:10:18 -- spdk/autotest.sh@391 -- # hash lcov
00:28:02.491 15:10:18 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:28:02.491 15:10:18 -- spdk/autotest.sh@393 -- # hostname
00:28:02.491 15:10:18 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:28:02.491 geninfo: WARNING: invalid characters removed from testname!
00:28:29.129 15:10:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:29.129 15:10:43 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:29.129 15:10:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:30.514 15:10:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:32.429 15:10:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:33.813 15:10:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:28:35.244 15:10:51 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
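The lcov sequence above is the standard capture, merge, filter flow: capture what the tests actually executed into cov_test.info, fold it into the pre-test baseline, then repeatedly subtract paths that should not count toward SPDK's coverage (bundled dpdk, system headers, example and tool sources). Condensed, with the repeated --rc/--no-external option block elided and $spdk_dir standing for the checked-out repo:

# Sketch of the traced lcov flow, minus the long option lists.
lcov -q -c -d "$spdk_dir" -t "$(hostname)" -o cov_test.info      # capture test run
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info      # merge with baseline
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
           '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r cov_total.info "$pat" -o cov_total.info           # strip excluded paths
done
rm -f cov_base.info cov_test.info                                # keep only the total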
00:28:35.244 15:10:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:28:35.244 15:10:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:35.244 15:10:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:35.244 15:10:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:35.244 15:10:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:35.244 15:10:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:35.244 15:10:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:35.244 15:10:51 -- paths/export.sh@5 -- $ export PATH
00:28:35.244 15:10:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:35.244 15:10:51 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:28:35.244 15:10:51 -- common/autobuild_common.sh@444 -- $ date +%s
00:28:35.244 15:10:51 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721049051.XXXXXX
00:28:35.244 15:10:51 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721049051.yWpnzQ
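autobuild stamps the run with the current epoch and lets mktemp fill in the XXXXXX suffix, so concurrent builds on one host each get a private scratch directory under $TMPDIR (here /tmp/spdk_1721049051.yWpnzQ). The idiom in isolation; the cleanup trap is an assumption, not something the trace shows:

# Sketch: per-run scratch workspace named after the build timestamp.
ts=$(date +%s)                                     # 1721049051 in this run
SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")   # -d: directory, -t: under $TMPDIR
trap 'rm -rf "$SPDK_WORKSPACE"' EXIT               # assumed cleanup, not in the log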
00:28:35.244 15:10:51 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:28:35.244 15:10:51 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:28:35.244 15:10:51 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:28:35.244 15:10:51 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:28:35.244 15:10:51 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:28:35.244 15:10:51 -- common/autobuild_common.sh@460 -- $ get_config_params
00:28:35.245 15:10:51 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:28:35.245 15:10:51 -- common/autotest_common.sh@10 -- $ set +x
00:28:35.245 15:10:51 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:28:35.245 15:10:51 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:28:35.245 15:10:51 -- pm/common@17 -- $ local monitor
00:28:35.245 15:10:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:35.245 15:10:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:35.245 15:10:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:35.245 15:10:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:35.245 15:10:51 -- pm/common@21 -- $ date +%s
00:28:35.245 15:10:51 -- pm/common@25 -- $ sleep 1
00:28:35.245 15:10:51 -- pm/common@21 -- $ date +%s
00:28:35.245 15:10:51 -- pm/common@21 -- $ date +%s
00:28:35.245 15:10:51 -- pm/common@21 -- $ date +%s
00:28:35.245 15:10:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721049051
00:28:35.245 15:10:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721049051
00:28:35.245 15:10:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721049051
00:28:35.245 15:10:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721049051
00:28:35.245 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721049051_collect-vmstat.pm.log
00:28:35.245 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721049051_collect-cpu-load.pm.log
00:28:35.245 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721049051_collect-cpu-temp.pm.log
00:28:35.245 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721049051_collect-bmc-pm.bmc.pm.log
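start_monitor_resources backgrounds one collector per monitored resource (CPU load, vmstat, CPU temperature and, via sudo -E, BMC power); each gets -d for the output directory, -l to log, and -p with a shared timestamped prefix, announces its logfile with a "Redirecting to ..." line, and leaves a <collector>.pid file for later teardown. A trimmed sketch of the launcher, with $spdk_dir/$output_dir standing for the paths in the trace and the pidfile convention inferred from the stop trace further down:

# Sketch: background one collector per resource, one pidfile each.
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)
start_monitor_resources() {
    local monitor prefix=monitor.autopackage.sh.$(date +%s)
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        # collect-bmc-pm actually runs under sudo -E; elided to keep the loop uniform
        "$spdk_dir/scripts/perf/pm/$monitor" -d "$output_dir/power" -l -p "$prefix" &
        echo $! > "$output_dir/power/$monitor.pid"
    done
}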
00:28:35.245 15:10:51 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:28:36.185 15:10:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:28:36.185 15:10:52 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:28:36.185 15:10:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:36.185 15:10:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:36.185 15:10:52 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:36.185 15:10:52 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:36.185 15:10:52 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:36.185 15:10:52 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:28:36.185 15:10:52 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:36.185 15:10:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:36.185 15:10:52 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:36.185 15:10:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:36.185 15:10:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:36.185 15:10:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:36.185 15:10:52 -- pm/common@44 -- $ pid=2012035
00:28:36.185 15:10:52 -- pm/common@50 -- $ kill -TERM 2012035
00:28:36.185 15:10:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:36.185 15:10:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:36.185 15:10:52 -- pm/common@44 -- $ pid=2012036
00:28:36.185 15:10:52 -- pm/common@50 -- $ kill -TERM 2012036
00:28:36.185 15:10:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:36.185 15:10:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:36.185 15:10:52 -- pm/common@44 -- $ pid=2012038
00:28:36.185 15:10:52 -- pm/common@50 -- $ kill -TERM 2012038
00:28:36.185 15:10:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:36.185 15:10:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:36.185 15:10:52 -- pm/common@44 -- $ pid=2012061
00:28:36.185 15:10:52 -- pm/common@50 -- $ sudo -E kill -TERM 2012061
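stop_monitor_resources is the EXIT trap installed just before autopackage ran: for each resource it checks whether the collector left a pidfile and, if so, sends that PID SIGTERM, using sudo for the BMC collector that was started privileged. A sketch matching the traced behaviour (same assumed variables as the launcher sketch above):

# Sketch: tear down the collectors by reading the pidfiles they left behind.
stop_monitor_resources() {
    local monitor pid pidfile
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        pidfile=$output_dir/power/$monitor.pid
        [[ -e $pidfile ]] || continue         # collector never started
        pid=$(<"$pidfile")
        if [[ $monitor == collect-bmc-pm ]]; then
            sudo -E kill -TERM "$pid"         # privileged collector
        else
            kill -TERM "$pid"
        fi
    done
}
trap stop_monitor_resources EXIT              # as installed in the trace above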
00:28:36.185 + [[ -n 1488997 ]]
00:28:36.185 + sudo kill 1488997
00:28:36.457 [Pipeline] }
00:28:36.477 [Pipeline] // stage
00:28:36.482 [Pipeline] }
00:28:36.495 [Pipeline] // timeout
00:28:36.500 [Pipeline] }
00:28:36.517 [Pipeline] // catchError
00:28:36.523 [Pipeline] }
00:28:36.541 [Pipeline] // wrap
00:28:36.549 [Pipeline] }
00:28:36.566 [Pipeline] // catchError
00:28:36.575 [Pipeline] stage
00:28:36.577 [Pipeline] { (Epilogue)
00:28:36.593 [Pipeline] catchError
00:28:36.595 [Pipeline] {
00:28:36.611 [Pipeline] echo
00:28:36.613 Cleanup processes
00:28:36.622 [Pipeline] sh
00:28:36.912 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:28:36.912 2012148 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:28:36.912 2012588 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:28:36.926 [Pipeline] sh
00:28:37.232 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:28:37.232 ++ grep -v 'sudo pgrep'
00:28:37.232 ++ awk '{print $1}'
00:28:37.232 + sudo kill -9 2012148
00:28:37.245 [Pipeline] sh
00:28:37.532 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:47.535 [Pipeline] sh
00:28:47.823 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:47.823 Artifacts sizes are good
00:28:47.840 [Pipeline] archiveArtifacts
00:28:47.847 Archiving artifacts
00:28:48.003 [Pipeline] sh
00:28:48.289 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:28:48.307 [Pipeline] cleanWs
00:28:48.367 [WS-CLEANUP] Deleting project workspace...
00:28:48.367 [WS-CLEANUP] Deferred wipeout is used...
00:28:48.374 [WS-CLEANUP] done
00:28:48.376 [Pipeline] }
00:28:48.399 [Pipeline] // catchError
00:28:48.409 [Pipeline] sh
00:28:48.691 + logger -p user.info -t JENKINS-CI
00:28:48.700 [Pipeline] }
00:28:48.720 [Pipeline] // stage
00:28:48.725 [Pipeline] }
00:28:48.743 [Pipeline] // node
00:28:48.751 [Pipeline] End of Pipeline
00:28:48.902 Finished: SUCCESS